The Berlekamp-Massey Algorithm revisited

Nadia Ben Atti, Gema M. Diaz-Toca, Henri Lombardi

Abstract

We propose a slight modification of the Berlekamp-Massey Algorithm for obtaining the minimal polynomial of a given linearly recurrent sequence. Such a modification makes it possible to explain the algorithm in a simpler way and to adapt it to lazy evaluation.

MSC 2000: 68W30, 15A03
Key words: Berlekamp-Massey Algorithm. Linearly recurrent sequences.
1 Introduction: The usual Berlekamp-Massey algorithm
Let $K$ be an arbitrary field. Given a linearly recurrent sequence, denoted by $S(x) = \sum_{i \ge 0} a_i x^i$, $a_i \in K$, we wish to compute its minimal polynomial, denoted by $P(x)$. Recall that if $P(x) = \sum_{i=0}^{d} p_i x^i$ denotes such a polynomial, then $P(x)$ is the polynomial of smallest degree such that
$$\sum_{i=0}^{d} p_i\, a_{j+i} = 0, \quad \text{for all } j \in \mathbb{N}.$$
Let us suppose that the minimal polynomial of $S(x)$ has degree bound $n$. Under this hypothesis, the Berlekamp-Massey Algorithm only requires the first $2n$ coefficients of $S(x)$ in order to compute the minimal polynomial. These coefficients define the polynomial $S = \sum_{i=0}^{2n-1} a_i x^i$.
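As a quick numerical illustration of this definition (our example, not the paper's): the Fibonacci sequence satisfies $a_{j+2} = a_{j+1} + a_j$, so its minimal polynomial is $x^2 - x - 1$, i.e. $p_0 = -1$, $p_1 = -1$, $p_2 = 1$:

```python
# Minimal-polynomial check on the Fibonacci sequence (illustrative example):
# p_0*a_j + p_1*a_{j+1} + p_2*a_{j+2} = 0 for all j, with P(x) = x^2 - x - 1.
fib = [0, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])

p = [-1, -1, 1]  # coefficients p_0, p_1, p_2 of P(x) = -1 - x + x^2
checks = [sum(p[i] * fib[j + i] for i in range(3)) for j in range(10)]
print(checks)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```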
A large literature on Berlekamp's Algorithm is available nowadays. The (original) Berlekamp's Algorithm was created for decoding Bose-Chaudhuri-Hocquenghem (BCH) codes in 1968 (see [1]). One year later, the original version of the algorithm was simplified by Massey (see [5]). The similarity of the algorithm to the extended Euclidean Algorithm can be found in several articles, for instance, in [2], [3], [6], [9] and [10]. Some more recent interpretations of the Berlekamp-Massey Algorithm in terms of Hankel matrices and Padé approximations can be found in [4] and [7].
The usual interpretation of the Berlekamp-Massey Algorithm for obtaining $P(x)$ is expressed in pseudocode in Algorithm 1.
In practice, we must apply the simplification of the extended Euclidean Algorithm given in [3] to recover exactly the Berlekamp-Massey Algorithm. Such a simplification is based on the fact that the initial $R_0$ is equal to $x^{2n}$.
Although Algorithm 1 is not complicated, it seems not easy to find a direct and transparent explanation for the determination of the degree of $P$. In the literature, we think there is some confusion between the different definitions of the minimal polynomial and the different ways of defining
Équipe de Mathématiques, CNRS UMR 6623, UFR des Sciences et Techniques, Université de Franche-Comté, 25
Galois Theory and Explicit Methods in Arithmetic Project HPRN-CT-2000-00114
Équipe de Mathématiques, CNRS UMR 6623, UFR des Sciences et Techniques, Université de Franche-Comté, 25 030 Besançon cedex, France, lombardi@math.univ-fcomte.fr, partially supported by the European Union funded project RAAG CT-2001-00271
Algorithm 1 The Usual Berlekamp-Massey Algorithm
Input: n ∈ N. The first 2n coefficients of a linearly recurrent sequence defined over K, given by the list [a_0, a_1, ..., a_{2n-1}]. The minimal polynomial has degree bound n.
Output: The minimal polynomial P of the sequence.
Start
Local variables: R, R_0, R_1, V, V_0, V_1, Q: polynomials in x
# initialization
R_0 := x^{2n} ; R_1 := sum_{i=0}^{2n-1} a_i x^i ; V_0 := 0 ; V_1 := 1 ;
# loop
while n ≤ deg(R_1) do
  (Q, R) := quotient and remainder of R_0 divided by R_1 ;
  V := V_0 - Q V_1 ;
  V_0 := V_1 ; V_1 := V ; R_0 := R_1 ; R_1 := R ;
end while
# exit
d := max(deg(V_1), 1 + deg(R_1)) ; P(x) := x^d V_1(1/x) ; Return P := P/leadcoeff(P).
End.
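For concreteness, here is a direct transcription of Algorithm 1 in Python (our sketch, not part of the paper), using exact rational arithmetic over $\mathbb{Q}$ via `fractions.Fraction`; polynomials are represented as coefficient lists, lowest degree first:

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list (lowest degree first); deg(0) = -1."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polydivmod(a, b):
    """Exact quotient and remainder of a divided by b, over Q."""
    db = deg(b)
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(deg(r) - db + 1, 1)
    while deg(r) >= db:
        shift, coef = deg(r) - db, r[deg(r)] / b[db]
        q[shift] += coef
        for i in range(db + 1):
            r[shift + i] -= coef * b[i]
    return q, r

def polysub(a, b):
    n = max(len(a), len(b))
    return [Fraction(a[i] if i < len(a) else 0) - Fraction(b[i] if i < len(b) else 0)
            for i in range(n)]

def polymul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += Fraction(x) * y
    return r

def berlekamp_massey_usual(n, a):
    """Algorithm 1: minimal polynomial from the first 2n terms of a."""
    R0 = [Fraction(0)] * (2 * n) + [Fraction(1)]   # x^(2n)
    R1 = [Fraction(c) for c in a]                  # sum_i a_i x^i
    V0, V1 = [Fraction(0)], [Fraction(1)]
    while n <= deg(R1):
        Q, R = polydivmod(R0, R1)
        V = polysub(V0, polymul(Q, V1))
        V0, V1, R0, R1 = V1, V, R1, R
    d = max(deg(V1), 1 + deg(R1))
    # P := x^d * V1(1/x): reverse V1 padded to degree d, then normalize
    P = list(reversed(V1[:deg(V1) + 1] + [Fraction(0)] * (d - deg(V1))))
    lc = P[deg(P)]
    return [c / lc for c in P]

result = berlekamp_massey_usual(3, [1, 2, 7, -9, 2, 7])  # the example of Section 2
print(result == [0, 1, 1, 1])  # True: P = x + x^2 + x^3
```

Note the final reversal step `x^d * V1(1/x)`, which is precisely what the modification of Section 2 removes.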
the sequence. Here, we introduce a slight modification of the algorithm which makes it more comprehensible and natural. We did not find such a modification in the literature before the first submission of this article (May 2004). However, we would like to add that it can also be found in [8], published in 2005, without any reference.
2 Some good reasons to modify the usual algorithm
On the one hand, as can be observed at the end of Algorithm 1, we have to compute the (nearly) reverse polynomial of $V_1$ in order to obtain the right polynomial. The following example helps us to understand what happens:
$$n = d = 3,$$
$$S = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 = 1 + 2x + 7x^2 - 9x^3 + 2x^4 + 7x^5,$$
$$\text{Algorithm 1}(3, [1, 2, 7, -9, 2, 7]) \Rightarrow P = x + x^2 + x^3,$$
with $V_1 = v_0 + v_1 x + v_2 x^2 = \frac{49}{67}(1 + x + x^2)$ and $R$ such that $S\, V_1 \equiv R \bmod x^6$, $\deg(R) = 2$, which implies that
$$\mathrm{coeff}(S\,V_1, x, 3) = a_1 v_2 + a_2 v_1 + a_3 v_0 = 2v_2 + 7v_1 - 9v_0 = 0,$$
$$\mathrm{coeff}(S\,V_1, x, 4) = a_2 v_2 + a_3 v_1 + a_4 v_0 = 7v_2 - 9v_1 + 2v_0 = 0,$$
$$\mathrm{coeff}(S\,V_1, x, 5) = a_3 v_2 + a_4 v_1 + a_5 v_0 = -9v_2 + 2v_1 + 7v_0 = 0.$$
Hence, the right degree of $P$ is given by the degree of the last $R_1$ plus one, because $x$ divides $P$. Observe that $a_0 v_2 + a_1 v_1 + a_2 v_0 = 490/67 \neq 0$. We would like to obtain the desired polynomial directly from $V_1$.
Moreover, on the other hand, in Algorithm 1 all the first $2n$ coefficients are required to start the usual algorithm, whereas $n$ only provides a degree bound for the minimal polynomial. Consequently, it may happen that the true degree of $P$ is much smaller than $n$, so that fewer coefficients of the sequence would suffice to obtain the wanted polynomial.
So, we suggest a more natural, efficient and direct way to obtain $P$. Our idea is to take the polynomial $\hat{S} = \sum_{i=0}^{2n-1} a_i x^{2n-1-i}$ as the initial $R_1$. Observe that in this case, using the same notation as in Algorithm 1, the same example shows that it is not necessary to reverse the polynomial $V_1$ at the end of the algorithm:
$$n = d = 3,$$
$$\hat{S} = a_0 x^5 + a_1 x^4 + a_2 x^3 + a_3 x^2 + a_4 x + a_5 = x^5 + 2x^4 + 7x^3 - 9x^2 + 2x + 7,$$
$$\text{Algorithm 2}(3, [1, 2, 7, -9, 2, 7]) \Rightarrow P = x + x^2 + x^3,$$
with $V_1 = v_0 + v_1 x + v_2 x^2 + v_3 x^3 = -\frac{9}{670}(x + x^2 + x^3)$ and $R$ such that $\hat{S}\, V_1 \equiv R \bmod x^6$, $\deg(R) = 2$, which implies that
$$\mathrm{coeff}(\hat{S}\,V_1, x, 3) = a_2 v_0 + a_3 v_1 + a_4 v_2 + a_5 v_3 = -9v_1 + 2v_2 + 7v_3 = 0,$$
$$\mathrm{coeff}(\hat{S}\,V_1, x, 4) = a_1 v_0 + a_2 v_1 + a_3 v_2 + a_4 v_3 = 7v_1 - 9v_2 + 2v_3 = 0,$$
$$\mathrm{coeff}(\hat{S}\,V_1, x, 5) = a_0 v_0 + a_1 v_1 + a_2 v_2 + a_3 v_3 = 2v_1 + 7v_2 - 9v_3 = 0.$$
Furthermore, when $n \gg \deg(P)$, the algorithm admits a lazy evaluation. In other words, the algorithm can be started with fewer than $2n$ coefficients; if the outcome does not provide the wanted polynomial, we increase the number of coefficients, and remark that it is not necessary to restart the algorithm from scratch, because we can take advantage of the computations done before. We will explain this application of the algorithm in Section 3.
Next, we introduce our modified Berlekamp-Massey Algorithm in pseudocode (Algorithm 2):
Algorithm 2 Modified Berlekamp-Massey Algorithm
Input: n ∈ N. The first 2n coefficients of a linearly recurrent sequence defined over K, given by the list [a_0, a_1, ..., a_{2n-1}]. The minimal polynomial has degree bound n.
Output: The minimal polynomial P of the sequence.
Start
Local variables: R, R_0, R_1, V, V_0, V_1, Q: polynomials in x; m = 2n - 1: integer.
# initialization
m := 2n - 1 ; R_0 := x^{2n} ; R_1 := sum_{i=0}^{m} a_{m-i} x^i ; V_0 := 0 ; V_1 := 1 ;
# loop
while n ≤ deg(R_1) do
  (Q, R) := quotient and remainder of R_0 divided by R_1 ;
  V := V_0 - Q V_1 ;
  V_0 := V_1 ; V_1 := V ; R_0 := R_1 ; R_1 := R ;
end while
# exit
Return P := V_1/lc(V_1) ;
End.
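The modified algorithm translates almost line by line into code. The sketch below (ours, with exact rationals via `fractions.Fraction`; polynomials as coefficient lists, lowest degree first) reproduces the example above:

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list (lowest degree first); deg(0) = -1."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polydivmod(a, b):
    """Exact quotient and remainder of a divided by b, over Q."""
    db = deg(b)
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(deg(r) - db + 1, 1)
    while deg(r) >= db:
        shift, coef = deg(r) - db, r[deg(r)] / b[db]
        q[shift] += coef
        for i in range(db + 1):
            r[shift + i] -= coef * b[i]
    return q, r

def polysub(a, b):
    n = max(len(a), len(b))
    return [Fraction(a[i] if i < len(a) else 0) - Fraction(b[i] if i < len(b) else 0)
            for i in range(n)]

def polymul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += Fraction(x) * y
    return r

def berlekamp_massey_modified(n, a):
    """Algorithm 2: start from the reversed coefficients; no final reversal."""
    m = 2 * n - 1
    R0 = [Fraction(0)] * (2 * n) + [Fraction(1)]     # x^(2n)
    R1 = [Fraction(a[m - i]) for i in range(m + 1)]  # sum_i a_{m-i} x^i
    V0, V1 = [Fraction(0)], [Fraction(1)]
    while n <= deg(R1):
        Q, R = polydivmod(R0, R1)
        V0, V1 = V1, polysub(V0, polymul(Q, V1))
        R0, R1 = R1, R
    lc = V1[deg(V1)]
    return [c / lc for c in V1[:deg(V1) + 1]]  # P := V1/lc(V1)

# The example above: P = x + x^2 + x^3, read off directly from V1
print(berlekamp_massey_modified(3, [1, 2, 7, -9, 2, 7]) == [0, 1, 1, 1])  # True
```

Compared with Algorithm 1, only the initialization of $R_1$ and the exit step differ.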
Now we prove our result. Let $a = (a_n)_{n}$ be an arbitrary list and $i, r, p \in \mathbb{N}$. Let $H^a_{i,r,p}$ denote the following Hankel matrix of order $r \times p$,
$$H^a_{i,r,p} = \begin{pmatrix} a_i & a_{i+1} & a_{i+2} & \cdots & a_{i+p-1} \\ a_{i+1} & a_{i+2} & & & a_{i+p} \\ a_{i+2} & & & & \vdots \\ \vdots & & & & \\ a_{i+r-1} & a_{i+r} & \cdots & \cdots & a_{i+r+p-2} \end{pmatrix}$$
and let $P_a(x)$ be the minimal polynomial of $a$.
The next proposition shows the well-known relation between the rank of a Hankel matrix and the sequence.
Proposition 1 Let $a$ be a linearly recurrent sequence. If $a$ has a generating polynomial of degree $\le n$, then the degree $d$ of its minimal polynomial $P_a$ is equal to the rank of the Hankel matrix
$$H^a_{0,n,n} = \begin{pmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} \\ a_1 & a_2 & & & a_n \\ a_2 & & & & \vdots \\ \vdots & & & & \\ a_{n-1} & a_n & \cdots & \cdots & a_{2n-2} \end{pmatrix}.$$
The coefficients of $P_a(x) = x^d - \sum_{i=0}^{d-1} g_i x^i \in K[x]$ are provided by the unique solution $G = (g_0, \ldots, g_{d-1})^t$ of the linear system
$$H^a_{0,d,d}\; G = H^a_{d,d,1},$$
that is,
$$\begin{pmatrix} a_0 & a_1 & \cdots & a_{d-1} \\ a_1 & a_2 & \cdots & a_d \\ \vdots & & & \vdots \\ a_{d-1} & a_d & \cdots & a_{2d-2} \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ \vdots \\ g_{d-1} \end{pmatrix} = \begin{pmatrix} a_d \\ a_{d+1} \\ \vdots \\ a_{2d-1} \end{pmatrix}. \qquad (1)$$
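Proposition 1 can be checked numerically on the example of Section 2 (our sketch; here $d = 3$ is hard-coded, while in general $d$ is the rank of $H^a_{0,n,n}$). Solving system (1) over $\mathbb{Q}$ recovers $P_a = x + x^2 + x^3$:

```python
from fractions import Fraction

def hankel(a, i, r, p):
    """Hankel matrix H^a_{i,r,p}: entry (j, k) equals a_{i+j+k}."""
    return [[Fraction(a[i + j + k]) for k in range(p)] for j in range(r)]

def solve(M, b):
    """Gauss-Jordan elimination over Q (M square and nonsingular)."""
    n = len(M)
    A = [M[i][:] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

a = [1, 2, 7, -9, 2, 7]                     # the example sequence, n = 3
d = 3                                       # rank of H^a_{0,3,3} here
G = solve(hankel(a, 0, d, d), a[d:2 * d])   # system (1): H^a_{0,d,d} G = H^a_{d,d,1}
P = [-g for g in G] + [Fraction(1)]         # P_a(x) = x^d - sum_i g_i x^i
print(P == [0, 1, 1, 1])  # True: P_a = x + x^2 + x^3
```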
As an immediate corollary of Proposition 1, we have the following result.
Corollary 2 Using the notation of Proposition 1, a vector $Y = (p_0, \ldots, p_n)^t$ is a solution of
$$H^a_{0,n,n+1}\; Y = 0,$$
that is,
$$\begin{pmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-1} & a_n \\ a_1 & a_2 & & & a_n & a_{n+1} \\ a_2 & & & & & \vdots \\ \vdots & & & & & \\ a_{n-1} & a_n & \cdots & \cdots & a_{2n-2} & a_{2n-1} \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ \vdots \\ p_{n-1} \\ p_n \end{pmatrix} = 0 \qquad (2)$$
if and only if the polynomial $P(x) = \sum_{i=0}^{n} p_i x^i \in K[x]$ is a multiple of $P_a(x)$.
Proof.
By Proposition 1, the dimension of $\mathrm{Ker}(H^a_{0,n,n+1})$ is $n - d + 1$. For $0 \le j \le n$, let $C_j$ denote the $j$th column of $H^a_{0,n,n+1}$, that is, $C_j = H^a_{j,n,1} = [a_j, a_{j+1}, \ldots, a_{n+j-1}]^t$. Since $P_a(x)$ is a generating polynomial of $a$, for $d \le j \le n$ we obtain that
$$C_j - \sum_{i=j-d}^{j-1} g_{i-j+d}\, C_i = 0.$$
Thus the linearly independent vectors $[-g_0, \ldots, -g_{d-1}, 1, 0, \ldots, 0]^t, \ldots, [0, \ldots, 0, -g_0, \ldots, -g_{d-1}, 1]^t$ define a basis of $\mathrm{Ker}(H^a_{0,n,n+1})$. Therefore, $Y = (p_0, \ldots, p_n)^t$ verifies $H^a_{0,n,n+1}\, Y = 0$ if and only if the polynomial $P(x) = \sum_{i=0}^{n} p_i x^i$ is a multiple of $P_a(x)$.
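Corollary 2 is easy to verify on the running example (our check): with $n = 3$, the coefficient vector of $P_a = x + x^2 + x^3$ lies in the kernel of $H^a_{0,3,4}$, while a vector such as $(1, 0, 0, 0)^t$, whose polynomial $1$ is not a multiple of $P_a$, does not:

```python
from fractions import Fraction

a = [1, 2, 7, -9, 2, 7]   # the example sequence, n = 3
n = 3
# H^a_{0,n,n+1}: n rows, n+1 columns, entry (i, j) = a_{i+j}
H = [[Fraction(a[i + j]) for j in range(n + 1)] for i in range(n)]

def apply(H, Y):
    """Matrix-vector product H Y over Q."""
    return [sum(row[j] * Y[j] for j in range(len(Y))) for row in H]

print(apply(H, [0, 1, 1, 1]) == [0, 0, 0])  # True: coefficient vector of P_a
print(apply(H, [1, 0, 0, 0]) == [0, 0, 0])  # False: 1 is not a multiple of P_a
```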
If we consider $m = 2n - 1$ and $\hat{S} = \sum_{i=0}^{m} a_{m-i} x^i$, by applying Equation (2) we obtain:
$$\exists\, R, U \in K[x] \text{ such that } \deg(R) < n,\ \deg(P) \le n \text{ and } P(x)\hat{S}(x) + U(x)\,x^{2n} = R(x). \qquad (3)$$
Indeed, the rows of (2) say precisely that the coefficients of degrees $n, \ldots, 2n-1$ of $P(x)\hat{S}(x)$ vanish. Hence, it turns out that finding the minimal polynomial of $a$ is equivalent to solving (3) with $P$ of minimum degree. Moreover, it is well known that the extended Euclidean Algorithm, applied to $x^{2n}$ and $\hat{S}$, provides an equality like (3) when the first remainder of degree smaller than $n$ is reached. Let $R_k$ denote such a remainder; if we consider other polynomials $P'(x)$, $U'(x)$ and $R'(x)$ such that $P'(x)\hat{S}(x) + U'(x)\,x^{2n} = R'(x)$ and $\deg(R') < \deg(R_{k-1})$, then $\deg(P') \ge \deg(P)$ and $\deg(U') \ge \deg(U)$.
This proves that our modification of the Berlekamp-Massey Algorithm is correct.
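The claim can be observed directly (our sketch over $\mathbb{Q}$): running the extended Euclidean Algorithm on $x^{2n}$ and $\hat S$ until the remainder drops below degree $n$, while keeping the Bézout coefficients, yields exactly an identity of the form (3):

```python
from fractions import Fraction

def deg(p):
    """Degree of a coefficient list (lowest degree first); deg(0) = -1."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polydivmod(a, b):
    """Exact quotient and remainder over Q."""
    db = deg(b)
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(deg(r) - db + 1, 1)
    while deg(r) >= db:
        shift, coef = deg(r) - db, r[deg(r)] / b[db]
        q[shift] += coef
        for i in range(db + 1):
            r[shift + i] -= coef * b[i]
    return q, r

def polyadd(a, b):
    n = max(len(a), len(b))
    return [Fraction(a[i] if i < len(a) else 0) + Fraction(b[i] if i < len(b) else 0)
            for i in range(n)]

def polysub(a, b):
    return polyadd(a, [-Fraction(c) for c in b])

def polymul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += Fraction(x) * y
    return r

def euclid_until(n, a):
    """Extended Euclid on x^(2n) and S_hat, stopped at the first remainder of
    degree < n; it maintains the invariant R1 = U1*x^(2n) + V1*S_hat."""
    m = 2 * n - 1
    F = [Fraction(0)] * (2 * n) + [Fraction(1)]        # x^(2n)
    Shat = [Fraction(a[m - i]) for i in range(m + 1)]  # reversed coefficients
    R0, R1 = F, Shat
    U0, U1 = [Fraction(1)], [Fraction(0)]
    V0, V1 = [Fraction(0)], [Fraction(1)]
    while n <= deg(R1):
        Q, R = polydivmod(R0, R1)
        U0, U1 = U1, polysub(U0, polymul(Q, U1))
        V0, V1 = V1, polysub(V0, polymul(Q, V1))
        R0, R1 = R1, R
    return U1, V1, R1, F, Shat

n = 3
U, V, R, F, Shat = euclid_until(n, [1, 2, 7, -9, 2, 7])
trim = lambda p: p[:deg(p) + 1]
lhs = polyadd(polymul(V, Shat), polymul(U, F))  # V*S_hat + U*x^(2n)
print(trim(lhs) == trim(R), deg(R) < n)  # True True
```

Normalizing $V$ by its leading coefficient again yields $x + x^2 + x^3$, as in Algorithm 2.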
3 Lazy Evaluation
Our modified Berlekamp-Massey Algorithm admits a lazy evaluation, which may be very useful in solving the following problem.
Let $f(x) \in K[x]$ be a squarefree polynomial of degree $n$. Let $B$ be the universal decomposition algebra of $f(x)$, let $A$ be a quotient algebra of $B$ and $a \in A$. Thus, $A$ is a zero-dimensional algebra given by
$$A \simeq K[X_1, \ldots, X_n]/\langle f_1, \ldots, f_n \rangle,$$
where $f_1, \ldots, f_n$ define a Gröbner basis. Our aim is to compute the minimal polynomial of $a$, or at least one of its factors. However, the dimension of $A$ over $K$ as a vector space, denoted by $m$, is normally too big to manipulate matrices of order $m$. Therefore, we apply the idea of Wiedemann's Algorithm, computing the coefficients of a linearly recurrent sequence, $a_t = \varphi(x^t)$, where $\varphi$ is a linear form over $A$. Moreover, since the computation of $x^t$ is usually very expensive and the minimal polynomial is likely to have degree smaller than the dimension, we are interested in computing the smallest possible number of coefficients needed to get the wanted polynomial.
Hence, we first choose $l < m$. We start Algorithm 2 with $l$ and $[\varphi(x^0), \ldots, \varphi(x^{2l-1})]$ as input, obtaining a polynomial as a result. Then we test whether this polynomial is the minimal one. If it is not, we choose another $l'$, $l < l' \le m$, and repeat the process with $2l'$ coefficients. However, in this next step it is possible to take advantage of all the quotients computed before (with the exception of the last one), so that the Euclidean Algorithm starts at
$$R_0 = U_0\, x^{2l'} + V_0 \sum_{i=0}^{2l'-1} \varphi(x^{2l'-1-i})\, x^i \quad \text{and} \quad R_1 = U_1\, x^{2l'} + V_1 \sum_{i=0}^{2l'-1} \varphi(x^{2l'-1-i})\, x^i,$$
where $U_0$, $V_0$, $U_1$ and $V_1$ are the Bézout coefficients computed in the previous step. Repeating this argument again and again, we obtain the minimal polynomial.
The following pseudocode is intended to facilitate the understanding of our lazy version of the Berlekamp-Massey Algorithm (Algorithm 3).
Obviously, the choice of $l$ is not unique. Here we have started at $l = \lfloor m/4 \rfloor$, adding two coefficients in every further step. In practice, the particular characteristics of the given problem could help to choose a proper $l$ and the way of increasing it through the algorithm. Of course, the simplification of the Euclidean Algorithm in [3] must be considered to optimize the procedure.
Algorithm 3 The lazy Berlekamp-Massey Algorithm (in some particular context)
Input: m ∈ N, C ∈ K^n, G: Gröbner basis, a ∈ A. The minimal polynomial has degree bound m.
Output: The minimal polynomial P of a.
Start
Local variables: l, i: integers; R, R_{-1}, R_0, R_1, V, V_{-1}, V_0, V_1, U, U_{-1}, U_0, U_1, S_0, S_1, Q: polynomials in x; L, W: lists; validez;
# initialization
l := floor(m/4) ;
L := [1, a] ; W := [1, Value(a, C)] ;
S_0 := x^{2l} ; S_1 := W[1] x^{2l-1} + W[2] x^{2l-2} ;
# loop
for i from 3 to 2l do
  L[i] := normalf(L[i-1] a, G) ; W[i] := Value(L[i], C) ; S_1 := S_1 + W[i] x^{2l-i} ;
end for
R_0 := S_0 ; R_1 := S_1 ; V_0 := 0 ; V_1 := 1 ; U_0 := 1 ; U_1 := 0 ;
# loop
while l ≤ deg(R_1) do
  (Q, R) := quotient and remainder of R_0 divided by R_1 ;
  V := V_0 - Q V_1 ; U := U_0 - Q U_1 ; U_{-1} := U_0 ; V_{-1} := V_0 ;
  V_0 := V_1 ; V_1 := V ; U_0 := U_1 ; U_1 := U ; R_0 := R_1 ; R_1 := R ;
end while
validez := Subs(x = a, V_1) ;
# loop
while validez ≠ 0 do
  l := l + 1 ;
  # loop
  for i from 2l - 1 to 2l do
    L[i] := normalf(L[i-1] a, G) ;
    W[i] := Value(L[i], C) ;
  end for
  S_0 := x^2 S_0 ; S_1 := x^2 S_1 + W[2l-1] x + W[2l] ;
  R_0 := U_{-1} S_0 + V_{-1} S_1 ; R_1 := U_0 S_0 + V_0 S_1 ;
  U_1 := U_0 ; V_1 := V_0 ; U_0 := U_{-1} ; V_0 := V_{-1} ;
  # loop
  while l ≤ deg(R_1) do
    (Q, R) := quotient and remainder of R_0 divided by R_1 ;
    V := V_0 - Q V_1 ; U := U_0 - Q U_1 ; U_{-1} := U_0 ; V_{-1} := V_0 ;
    V_0 := V_1 ; V_1 := V ; U_0 := U_1 ; U_1 := U ; R_0 := R_1 ; R_1 := R ;
  end while
  validez := Subs(x = a, V_1) ;
end while
# exit
Return P := V_1/lc(V_1) ;
End.
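Outside the algebra-of-invariants setting, the lazy strategy itself can be sketched in a few lines (our illustration). Here the test validez := Subs(x = a, V_1) is replaced by checking that the candidate annihilates further terms of the sequence, and, for simplicity, the restart from the saved Bézout coefficients is omitted: each round simply reruns Algorithm 2 on a longer prefix:

```python
from fractions import Fraction
from functools import lru_cache

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def polydivmod(a, b):
    db = deg(b)
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(deg(r) - db + 1, 1)
    while deg(r) >= db:
        shift, coef = deg(r) - db, r[deg(r)] / b[db]
        q[shift] += coef
        for i in range(db + 1):
            r[shift + i] -= coef * b[i]
    return q, r

def polysub(a, b):
    n = max(len(a), len(b))
    return [Fraction(a[i] if i < len(a) else 0) - Fraction(b[i] if i < len(b) else 0)
            for i in range(n)]

def polymul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += Fraction(x) * y
    return r

def bm(n, a):
    """Algorithm 2 (modified Berlekamp-Massey) on the first 2n terms of a."""
    m = 2 * n - 1
    R0 = [Fraction(0)] * (2 * n) + [Fraction(1)]
    R1 = [Fraction(a[m - i]) for i in range(m + 1)]
    V0, V1 = [Fraction(0)], [Fraction(1)]
    while n <= deg(R1):
        Q, R = polydivmod(R0, R1)
        V0, V1 = V1, polysub(V0, polymul(Q, V1))
        R0, R1 = R1, R
    lc = V1[deg(V1)]
    return [c / lc for c in V1[:deg(V1) + 1]]

def annihilates(P, a):
    """Does sum_i P[i]*a[j+i] vanish on every window of the list a?"""
    d = deg(P)
    return all(sum(P[i] * a[j + i] for i in range(d + 1)) == 0
               for j in range(len(a) - d))

def lazy_min_poly(term, m):
    """term(t) -> t-th sequence element, m = degree bound: grow l on failure."""
    l = max(m // 4, 1)
    while True:
        P = bm(l, [term(t) for t in range(2 * l)])
        # stand-in for the paper's test validez := Subs(x = a, V_1)
        if annihilates(P, [term(t) for t in range(2 * l + m)]):
            return P
        l += 1

@lru_cache(maxsize=None)
def fib(t):
    return t if t < 2 else fib(t - 1) + fib(t - 2)

print(lazy_min_poly(fib, 6) == [-1, -1, 1])  # True: x^2 - x - 1, found at l = 2
```

With the bound m = 6, the first round (l = 1, two coefficients) produces a wrong candidate and is rejected; the second round (l = 2, four coefficients) already finds x^2 - x - 1, far below the bound.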
References
[1] E.R. Berlekamp, Algebraic Coding Theory, McGraw-Hill, New York, ch. 7 (1968).
[2] U. Cheng, On the continued fraction and Berlekamp's algorithm, IEEE Trans. Inform. Theory, vol. IT-30, 541–544 (1984).
[3] J.L. Dornstetter, On the Equivalence Between Berlekamp's and Euclid's Algorithms, IEEE Trans. Inform. Theory, vol. IT-33, no. 3, 428–431 (1987).
[4] E. Jonckheere and C. Ma, A Simple Hankel Interpretation of the Berlekamp-Massey Algorithm, Linear Algebra and its Applications 125, 65–76 (1989).
[5] J.L. Massey, Shift-register synthesis and BCH decoding, IEEE Trans. Inform. Theory, vol. IT-15, 122–127 (1969).
[6] W.H. Mills, Continued Fractions and Linear Recurrences, Math. Comput. 29, 173–180 (1975).
[7] V. Pan, New Techniques for the Computation of Linear Recurrence Coefficients, Finite Fields and Their Applications 6, 93–118 (2000).
[8] V. Shoup, A Computational Introduction to Number Theory and Algebra, Cambridge University Press (2005).
[9] Y. Sugiyama et al., A method for solving key equation for decoding Goppa codes, Inform. Control, vol. 27, 87–99 (1975).
[10] L.R. Welch and R.A. Scholtz, Continued fractions and Berlekamp's algorithm, IEEE Trans. Inform. Theory, vol. IT-25, 18–27 (1979).