Metropolis Algorithm for solving Shortest Lattice Vector Problem (SVP)
Ajitha Shenoy K B
Department of Computer
Science and Engineering
Indian Institute of Technology
Kanpur, INDIA
ajith@cse.iitk.ac.in
Somenath Biswas
Department of Computer
Science and Engineering
Indian Institute of Technology
Kanpur, INDIA
sb@cse.iitk.ac.in
Piyush P Kurur
Department of Computer
Science and Engineering
Indian Institute of Technology
Kanpur, INDIA
ppk@cse.iitk.ac.in
Abstract—In this paper we study the suitability of the Metropolis algorithm and its generalization for solving the shortest lattice vector problem (SVP). SVP has numerous applications spanning from robotics to computational number theory, viz., polynomial factorization. At the same time, SVP is a notoriously hard problem. Not only is it NP-hard, no polynomial-time algorithm is known that approximates it even to within a polynomial factor. What one normally uses is the LLL algorithm which, although a polynomial time algorithm, may give solutions which are an exponential factor away from the optimum. In this paper, we define an appropriate search space for the problem which we use for the implementation of the Metropolis algorithm. We define a suitable neighbourhood structure which makes the diameter of the space polynomially bounded, and we ensure that each search point has only polynomially many neighbours. This search space formulation can also be used for some other classes of evolutionary algorithms, e.g., for genetic and go-with-the-winner algorithms. We have implemented the Metropolis algorithm and Hastings' generalization of the Metropolis algorithm for the SVP. Our results are quite encouraging in all instances when compared with the LLL algorithm.
Index Terms—SVP, Search Space, Metropolis Algorithm, Hastings' Generalization, LLL.
I. INTRODUCTION
We investigate in this paper the suitability of the Metropolis algorithm [1][2] for solving the shortest lattice vector problem (SVP, for short). The Metropolis algorithm is a widely used randomized search heuristic and is often used in practice to solve optimization problems. It is known that the algorithm performs surprisingly well even for some provably hard problems; e.g., [2] showed that the Metropolis algorithm is efficient for random instances of the graph bisection problem. It is, therefore, of interest to investigate the performance of the algorithm on SVP, another hard problem of great interest from both theoretical and practical considerations.
Van Emde Boas [3] proved in 1981 that SVP is NP-hard for the ℓ∞ norm and mentioned that the same should be true for any ℓp norm. However, proving NP-hardness in the ℓ2 norm (or in any finite ℓp norm) was an open problem for a long time. A breakthrough result by Ajtai [4] in 1998 finally showed that SVP is NP-hard under randomized reductions. Another breakthrough by Micciancio [5] in 2001 showed that SVP is hard to approximate within some constant factor, specifically any factor less than √2. This is the best result known so far, leaving a huge gap between the √2 hardness factor and the exponential approximation factors achieved by Lenstra et al. [6] in 1982, Schnorr [7] in 1988, and Ajtai et al. [8] in 2003. At the same time, there are many situations in practice which require at least a good solution for SVP, because this is an essential step in most algorithms for factoring polynomials.
The structure of the paper is as follows: in the following section we define SVP; in Section III we define an appropriate search space for our approach to solving SVP; in Section IV we show how we use our search space to implement the Metropolis algorithm and Hastings' generalization of it, and we also mention why the latter is more appropriate for our search space. Section V provides experimental data on how our implementation compares with a standard implementation of the LLL algorithm. The paper ends with some concluding remarks.
II. SHORTEST VECTOR PROBLEM (SVP)
Definition 2.1 (Lattices): Let B = {b1, b2, ..., bn} be a set of linearly independent vectors in the m-dimensional Euclidean space R^m, where m ≥ n. The set L(B) of all vectors a1 b1 + ... + an bn, the ai's varying over the integers, is called the integer lattice, or simply the lattice, with basis B (or generated by B), and n is the dimension of L(B). If m = n, we say that the lattice is of full dimension.
In this paper, we consider only full dimensional lattices. Furthermore, we consider only lattices whose basis vectors have rational components. In such a case, we can clear the denominators and assume that each basis vector lies in Z^n instead of R^n.
The basis can be compactly represented as an n × n matrix (also denoted B) with the basis vectors b1, ..., bn as its columns. Then we can write L(B) = {Ba : a ∈ Z^n}.
Problem 2.2 (Shortest Lattice Vector Problem (SVP)): Given a lattice L(B) contained in Z^n, specified by linearly independent vectors b1, ..., bn, the SVP problem is to find a shortest (in Euclidean norm) non-zero vector of L(B).
Without loss of generality, we consider the decision (and the search) version of the above problem: given a basis B as above and a rational number K > 0, decide if there is a non-zero vector v belonging to L(B) such that ||v|| < K, where ||v|| denotes the Euclidean norm of v, and if the answer is 'yes', output such a vector.
Clearly, SVP can be solved by logarithmically many applications of the decision version of the problem, e.g., by a binary search on the bound K.
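To make this reduction concrete, here is a minimal Python sketch (ours, not the paper's implementation) of the binary search just mentioned. It assumes a hypothetical decision oracle decide(B, K2) that returns a non-zero lattice vector of squared norm at most K2 if one exists, and None otherwise; since the basis is integral, squared norms are integers and the search is exact.

```python
def shortest_vector(B, decide):
    """Find a shortest non-zero lattice vector via binary search on K^2.

    B is an integer basis given entry-wise as B[r][i]; decide(B, K2) is the
    assumed (hypothetical) decision oracle described above.
    """
    n = len(B)
    cols = [[B[r][i] for r in range(n)] for i in range(n)]
    best = min(cols, key=lambda c: sum(x * x for x in c))  # some basis vector
    lo, hi = 1, sum(x * x for x in best)   # optimum squared norm lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        v = decide(B, mid)                 # non-zero v with ||v||^2 <= mid, or None
        if v is None:
            lo = mid + 1                   # no such vector: optimum exceeds mid
        else:
            best, hi = v, sum(x * x for x in v)
    return best                            # O(log hi) oracle calls, polynomial in input size
```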
III. SEARCH SPACE FOR SVP
The working of the Metropolis algorithm on an instance can be viewed as a biased random walk on a finite neighbourhood structure of states. Each state of the structure represents a feasible solution of the optimization problem instance being solved, and the structure has a goal state, the optimum point, which the algorithm intends to locate. For minimization problems, as our problem SVP is, each point in the structure has a cost, and the goal state is the state with minimum cost (or, for decision versions, a state with cost less than or equal to a given specified cost). At any given step, the algorithm is at one of the points in the search space; it selects one of the neighbouring points probabilistically and then transits to that point. The bias ensures that the algorithm eventually reaches the goal state without getting stuck at local minima. For the Metropolis algorithm to run efficiently, it is necessary that the neighbourhood structure for an instance satisfies: (a) there should be at most exponentially (in instance size) many elements in the structure, (b) the diameter of the structure should be bounded above by a fixed polynomial in the instance size, and (c) the set of neighbours of any element should be computable in time polynomial in the instance size, and similarly, the cost of any state should be computable efficiently.
To justify the way we define our search space for the SVP, we need the following result.
Proposition 3.1: Let B be an n × n non-singular integer matrix and let u be an n × 1 vector with ||u|| ≤ k, k being a non-zero constant. If there is an integer vector w such that Bw = u, then the magnitude of every component of w is bounded above by M = (αn)^n, where α denotes the largest value amongst the magnitudes of all the elements of B and k taken together.
The proof follows easily from Cramer's rule: each component of w equals det(B_i)/det(B), where B_i is B with its i-th column replaced by u. Note first that det B ≠ 0, and in fact |det B| ≥ 1 since B is a non-singular integer matrix; and second, that if β is the largest magnitude amongst the entries of an n × n matrix Y, then |det Y| ≤ (βn)^n.
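As a quick numerical illustration of Proposition 3.1 (ours, not from the paper), the following sketch builds a random integer instance, constructs an integer solution w of Bw = u, and confirms that its components respect the bound M = (αn)^n; the instance sizes and names are arbitrary.

```python
# Sanity check of Proposition 3.1 on a small random instance.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.integers(-5, 6, size=(n, n))
while abs(round(np.linalg.det(B))) < 1:        # ensure B is non-singular
    B = rng.integers(-5, 6, size=(n, n))
w = rng.integers(-3, 4, size=n)                # an integer solution, by construction
u = B @ w
k = math.ceil(np.linalg.norm(u))               # a bound with ||u|| <= k
alpha = max(int(abs(B).max()), k)              # largest magnitude among B's entries and k
M = (alpha * n) ** n
assert all(abs(int(c)) <= M for c in w), "bound of Proposition 3.1 violated"
print(f"max |w_i| = {abs(w).max()}, M = {M}")
```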
Definition 3.2 (Search Space for SVP): Let B, an n × n matrix, be the basis of a lattice L and let k be a given constant. Our goal is to look for a lattice vector of norm k or less. The search space for this instance of the SVP is as follows, where M is as in Proposition 3.1, and m is a parameter fixed in the implementation (m can be fixed as a constant for all instances, or, more usually, it will be a fixed multiple of n).
1) [Search space elements] The search space elements are matrices of the form A' = [A|I], where I is the n × n identity matrix and A is an n × m matrix with all entries of magnitude bounded above by M, M as in Proposition 3.1.
2) [Definition of neighbourhood] For two elements R' and S', the latter is a neighbour of the former if S can be obtained from R by any of the following elementary operations:
a) swapping two columns of R;
b) multiplying a column of R by −1;
c) adding a power-of-2 multiple of one column of R' to another column of R, provided each component of the resultant column has magnitude less than or equal to M. In particular, r_i ← r'_i ± c × r'_j, where i ≠ j, 1 ≤ i ≤ m, 1 ≤ j ≤ m + n, and c is a positive integer of the form c = 2^0, ..., 2^k with k = n · log(αn) (logarithms to base 2, so that 2^k = M). (For a matrix R', r'_i denotes its i-th column. As stated already, this operation is allowed only if the component magnitude condition is satisfied.)
3) [Cost associated with a search space element] For an element A' = [A|I] of the search space, its cost c(A') is defined to be t, where t is the norm of the vector v which has the smallest norm amongst the (n + m) column vectors of B[A|I]. (In other words, by pre-multiplying [A|I] by the basis matrix B, we obtain an n × (m + n) matrix; t is the least norm amongst these (m + n) column vectors of B[A|I].) A sketch of this representation follows the definition.
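For concreteness, the following Python sketch (our illustration, not the authors' implementation) represents a search space element as an n × (m + n) integer matrix [A|I] and samples a random neighbour by one of the three elementary operations. The helper names random_neighbour and cost are ours; numpy's fixed-width integers suffice only for small instances, since entries may approach M.

```python
import random
import numpy as np

def random_neighbour(A1, m, M, k):
    """A1 is the n x (m+n) integer matrix [A | I]; return a random neighbour.

    Entries may approach M = (alpha*n)^n, so real instances need
    arbitrary-precision integers (e.g. dtype=object); int64 keeps the
    sketch simple.
    """
    n = A1.shape[0]
    S = A1.copy()
    op = random.choice(("swap", "negate", "add"))
    if op == "swap":                        # (a) swap two columns of the A-part
        i, j = random.sample(range(m), 2)
        S[:, [i, j]] = S[:, [j, i]]
    elif op == "negate":                    # (b) multiply a column of A by -1
        i = random.randrange(m)
        S[:, i] = -S[:, i]
    else:                                   # (c) add +/- 2^t * (column j) to column i
        i = random.randrange(m)
        j = random.choice([x for x in range(m + n) if x != i])
        c = 2 ** random.randrange(k + 1)
        cand = S[:, i] + random.choice((1, -1)) * c * S[:, j]
        if np.abs(cand).max() > M:          # magnitude constraint of Definition 3.2
            return A1                       # constraint violated: stay put
        S[:, i] = cand
    return S

def cost(B, A1):
    """Cost c(A'): least norm among the non-zero columns of B [A | I]."""
    V = (B @ A1).astype(float)
    norms = np.linalg.norm(V, axis=0)
    norms[norms == 0] = np.inf              # the zero vector gets infinite cost
    return norms.min()
```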
The following proposition follows easily from the way we have defined our search space.
Proposition 3.3: Let the n × n matrix B be a basis of the lattice L, and let k be the given constant of the SVP instance.
1) For every element [A|I] of the search space, each of the (n + m) column vectors of the matrix B[A|I] is a vector of the lattice L. Also, the (m + n) column vectors of B[A|I] generate the lattice L.
2) If L contains a vector of norm k or less, then the search space for SVP with L, k will contain an element [A|I] such that one of the (m + n) column vectors of B[A|I] is of norm k or less. (The search space uses M as defined in Proposition 3.1.)
The first part is obvious, as every column vector of B[A|I] is an integer linear combination of the n lattice basis vectors. Also, as the vectors of B are contained amongst the columns of B[A|I], B and B[A|I] both generate the same lattice (here we use the assumption that L is full dimensional). The second part of the proposition also follows easily: we know from Proposition 3.1 that if there is a lattice vector of norm k or less, then there is a vector v, the magnitude of each component of v bounded by M, such that the norm of Bv is k or less. Our search space will have a member [A|I] with A containing v as a column, since such a column can be built up from the columns of the identity matrix I by the elementary operations we allow.
We now show that our search space definition satisfies the requirements stated at the start of this section. First we prove that every search space element has at most polynomially many neighbours.
Theorem 3.4: Let n, m be as in Definition 3.2 and M be as in Proposition 3.1. The number of neighbours of any node A' in the search space is O(m² log M). (As log M is n log(αn), and log α is the number of bits required to specify the largest-magnitude number in the problem instance, every element therefore has at most polynomially many neighbours.)
The proof follows by noting that a search space element has at most C(m, 2) neighbours through the first kind of elementary operation, m through the second kind, and C(m, 2) × 2(k + 1) + 2mn(k + 1) of the third kind, where k is O(log M).
Next, our goal is to show that the search space has a
polynomially bounded diameter.
Theorem 3.5: There is an O(mn log M)-length path between any two elements A' = [A|I] and B' = [B|I] of the search space (and vice versa).
Proof: Let us first show how we can replace the i-th column a_i of A with b_i, the i-th column of B. In the first stage, using elementary operations, we get e_j in place of a_i, where e_j denotes the j-th column of the identity matrix I. (Since A has m ≥ n columns and m is a multiple of n, we have i = qn + j for some q ∈ Z, where 1 ≤ j ≤ n, with j = n if i = qn.) We have to set the j-th component of a_i to 1 and every other component of a_i to 0. Suppose that the r-th component of a_i is x. Write x = c_0 x_0 + ... + c_k x_k, where c_i = 2^i, each x_i is 0 or 1, and k is O(log M). For r ≠ j we set the r-th component to zero by performing the operations a_i ← a_i − x e_r, which takes at most k + 1 elementary operations. For r = j we set the component to one by performing a_i ← a_i − (x − 1) e_j, again in at most k + 1 elementary operations, since x − 1 = y = 2^0 y_0 + ... + 2^k y_k, where each y_t is 0 or 1, 0 ≤ t ≤ k. So the total number of elementary operations to set all components of a_i is bounded by n(k + 1), and therefore the total number of elementary operations to set a_i for all 1 ≤ i ≤ m is bounded by mn(k + 1). Now in the second stage, using elementary operations, we get b_i in place of a_i = e_j. Let the r-th component of b_i be z = 2^0 z_0 + ... + 2^k z_k, where each z_t is 0 or 1, 0 ≤ t ≤ k. If r = j, performing the operation a_i ← a_i + (z − 1) e_j sets the j-th component of a_i to the j-th component of b_i in at most k + 1 elementary operations. For r ≠ j we set the r-th component of a_i to the r-th component of b_i by performing a_i ← a_i + z e_r. This implies that we can set a_i to b_i in at most n(k + 1) elementary operations. Hence we can set a_i to b_i for all 1 ≤ i ≤ m in at most nm(k + 1) elementary operations. Therefore we can transform A to B in at most 2nm(k + 1) elementary operations, i.e., O(mn log M). Hence the proof. A sketch of this two-stage rewriting follows.
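The two-stage rewriting can be made concrete. The sketch below (ours, using 0-based column indices) decomposes each required component change into the allowed ± 2^t steps against identity columns and counts the elementary operations used; it only counts operations and does not track the magnitude constraint of Definition 3.2.

```python
def binary_ops(delta, axis):
    """Elementary ops a <- a + sign * 2^t * e_axis realizing a total change of delta."""
    ops, sign, x = [], (1 if delta >= 0 else -1), abs(delta)
    t = 0
    while x:
        if x & 1:
            ops.append((sign, t, axis))    # one elementary operation: a += sign * 2^t * e_axis
        x >>= 1
        t += 1
    return ops                             # at most k+1 operations for |delta| < 2^(k+1)

def path_length(a, b):
    """Number of elementary operations turning column a into column b,
    going through the intermediate unit vector e_j (here j = 0)."""
    n = len(a)
    stage1 = sum(len(binary_ops((1 if r == 0 else 0) - a[r], r)) for r in range(n))
    stage2 = sum(len(binary_ops(b[r] - (1 if r == 0 else 0), r)) for r in range(n))
    return stage1 + stage2                 # bounded by 2 n (k+1), with k = O(log M)

print(path_length([7, -3, 5], [2, 9, -4]))
```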
We have proved that the entries of A are bounded by M = (αn)^n, and Theorem 3.5 shows that there is a short path by which we can reach any node in the search space. Hence our search space of Definition 3.2 ensures that the entries of intermediate matrices do not grow exponentially, and also that we can reach any state from any other within a polynomial number of steps. We are interested in finding a shortest non-zero vector in the lattice, but there is a chance of obtaining the zero vector while applying the elementary operations of Definition 3.2 to the matrix A' = [A|I]. To avoid this, we define the cost of the zero vector to be infinity, which prevents the algorithm from moving to such neighbours owing to their very high cost. Let us now describe the Metropolis algorithm for SVP.
IV. METROPOLIS ALGORITHM
The pseudo-code of the Metropolis algorithm is given below (Algorithm 1). As mentioned before, the Metropolis algorithm is the execution of a Markov process. It is therefore completely defined once the transition probabilities are defined.
Consider the search space and neighbourhood structure defined in Definition 3.2. Observe that only one column of the current solution is changed by an elementary operation performed on it. The cost function can be modified accordingly: let R' be the current solution and S' the new solution, where S' is obtained from R' by applying one of the elementary operations of Definition 3.2 to the r-th column of R. Then the cost c(R') is the Euclidean norm of the r-th column vector of BR, and c(S') is the Euclidean norm of the r-th column vector of BS.
The Metropolis algorithm on instance R' = [R|I] runs a Markov chain X^{R'} = (X^{R'}_1, X^{R'}_2, ...), using the temperature parameter T. The state space of the chain is the set S^{R'} of the feasible solutions of R'. Let d denote the degree of a node in the search graph, where d = O(m² · log M) as in Theorem 3.4. Let R' and S' denote any two feasible solutions, and let N(R') denote the neighbourhood of R'. Then the transition probabilities are as follows:

q_{R'S'} = 0                              if R' ≠ S' and S' ∉ N(R'),
q_{R'S'} = e^{−(c(S') − c(R'))/T} / d     if c(S') > c(R') and S' ∈ N(R'),
q_{R'S'} = 1/d                            if c(S') ≤ c(R') and S' ∈ N(R'),
q_{R'S'} = 1 − Σ_{J' ≠ R'} q_{R'J'}       if R' = S'.
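As an illustration of these transition probabilities, here is a single step of the chain in Python (a sketch reusing the hypothetical random_neighbour and cost helpers from the earlier search-space sketch): downhill moves are always accepted, and an uphill move is accepted with probability e^{−(c(S') − c(R'))/T}.

```python
import math
import random

def metropolis_step(B, R1, T, m, M, k):
    """One step of the Metropolis chain on the search space of Definition 3.2."""
    S1 = random_neighbour(R1, m, M, k)     # proposal: uniform over neighbours
    dc = cost(B, S1) - cost(B, R1)
    if dc <= 0 or random.random() < math.exp(-dc / T):
        return S1                          # accept (always, if not uphill)
    return R1                              # reject: self-loop at R'
```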
The complete algorithm (Algorithm 1) is given below.
Algorithm 1 Metropolis Algorithm
1: Input: B, a basis for the lattice L, and a rational number K.
2: Output: a matrix R' such that BR' contains a vector v with ||v|| ≤ K.
3: Let I be the n × n identity matrix. Let R' = [R|I] be the starting state in the search space as in Definition 3.2, and let c(R') denote the cost of R' as defined at the beginning of this section.
4: Set BestNorm = c(R')
5: while BestNorm > K do
6:   Select a neighbour S' of R' uniformly at random by performing one of the elementary operations of Definition 3.2.
7:   if BestNorm > c(S') then
8:     BestNorm = c(S')
9:   end if
10:  Set R' = S' with probability α = min( e^{−c(S')/T} / e^{−c(R')/T}, 1 )
11: end while

HASTINGS' GENERALIZATION OF THE METROPOLIS ALGORITHM
The way the Metropolis algorithm decides about moving from the current state s_i to a state in its neighbourhood can be seen as a two-stage process: first, choose a neighbour s_j uniformly at random (the proposal stage), and then, with a probability α which depends upon the relative costs of the solutions associated with s_j and s_i, move to s_j or remain at s_i (the acceptance stage).
In our case, the neighbours of a state [A|I] are the states [A'|I], where A' is obtained by performing an elementary operation using the vectors in A and I. Some of the elementary operations represent what we call long jumps, because a vector v is replaced by another vector u where there is a large difference between the norms of v and u. This happens when v is replaced by v ± cw and the constant c is large. It is desirable to have control over how extensively our algorithm makes use of such long jumps. This is not possible in the standard Metropolis algorithm, as the proposal stage chooses a neighbour uniformly at random.
To overcome this problem, we make use of Hastings' generalization [9] of the Metropolis algorithm. In this generalization, we can use any probability distribution to select the neighbour of a state in the proposal stage. Let q_{xz} denote the probability with which we select a neighbour z when the current state is x. Let x be a state and let y_1, ..., y_{n_x} be the neighbours of x. Then

q_{xz} = 0     if x ≠ z and z ∉ N(x),
q_{xz} = θ     if x = z,
q_{xz} = r_i   if z = y_i,

where the values r_i can be chosen appropriately depending on how much we want to invest in each strategy.
The Hastings-generalized Metropolis algorithm M2 runs on the same state space but has different transition probabilities. Suppose the chain M2 is at a state x at some step. Then:
1) With probability q_{xz}, M2 selects a state z in the neighbourhood.
2) If z = x, then the next state of M2 is x.
3) If z = y_i, we first compute α defined as

α = min( (e^{−c(y_i)/T} · q_{y_i x}) / (e^{−c(x)/T} · q_{x y_i}), 1 ).

Here, for any state z, c(z) represents the cost of the candidate solution of z, and T is a fixed temperature parameter.
4) We move to y_i with probability α, else we remain in the present state x.
It can be verified easily that the chain M2 is time-reversible and that in its stationary distribution the probability π_x of a state x is given by

π_x = e^{−c(x)/T} / Z,

where Z is the normalizing factor Σ_i e^{−c(i)/T}. The chain M2 is Hastings' generalization. This chain has the same stationary distribution as the usual Metropolis algorithm, but has the flexibility of fine-tuning the probability of choosing a neighbour to reflect the structure of the problem at hand. In our implementation, we keep q_{x y_i} the same as q_{y_i x}. The detailed algorithm (Algorithm 2) is given below. In the next section we compare the results of our algorithm with those of the LLL algorithm.
Algorithm 2 Hastings' Generalization
1: Input: B, a basis for the lattice L, and a rational number K.
2: Output: a matrix R' such that BR' contains a vector v with ||v|| ≤ K.
3: Let I be the n × n identity matrix. Let R' = [R|I] be the starting state in the search space as in Definition 3.2, and let c(R') denote the cost of R' as defined at the beginning of this section. Let d denote the total number of neighbours as in Theorem 3.4.
4: Set BestNorm = c(R')
5: while BestNorm > K do
6:   Select a neighbour S' of R' by performing one of the following elementary operations:
     - swap two columns of R, with probability C(m, 2)/d;
     - multiply a column of R by −1, with probability m/d;
     - add a power-of-2 multiple of a column of R' to another column of R, i.e., r_i ← r'_i ± c × r'_j (i ≠ j, 1 ≤ i ≤ m, 1 ≤ j ≤ m + n, where c = 2^0, ..., 2^k and k = n · log(αn)), with probability ((d − C(m, 2) − m)/d) · P_i, where P_i denotes the probability of selecting the value c = 2^i and Σ_{i=0}^{k} P_i = 1.
     [We can use more than one probability distribution to select values for c. In our implementation we have used two distributions, P_i = 1/(k + 1) and Q_i = 2(k + 1 − i)/((k + 1)(k + 2)), and we alternate between P_i and Q_i after every fixed number of steps (500 steps); a sketch of this choice follows the algorithm.]
7:   if BestNorm > c(S') then
8:     BestNorm = c(S')
9:   end if
10:  Set R' = S' with probability α = min( e^{−c(S')/T} / e^{−c(R')/T}, 1 )
11: end while
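The following sketch (ours, not the authors' code) shows the non-uniform choice of the multiplier c = 2^i used in the third operation: the exponent is drawn either from the uniform P_i or from the linearly decaying Q_i, which favours small jumps (it is easy to check that the Q_i sum to 1), and the two distributions are alternated every 500 steps.

```python
import random
from collections import Counter

def draw_exponent(k, use_Q):
    """Draw the exponent i for the multiplier c = 2^i."""
    if not use_Q:
        return random.randrange(k + 1)      # P_i = 1/(k+1), uniform
    weights = [2 * (k + 1 - i) / ((k + 1) * (k + 2)) for i in range(k + 1)]
    return random.choices(range(k + 1), weights=weights)[0]  # Q_i, favours small i

use_Q, seen = False, Counter()
for step in range(2000):
    i = draw_exponent(10, use_Q)
    seen[2 ** i] += 1                       # 2^i would be the multiplier c in operation (c)
    if (step + 1) % 500 == 0:               # alternate P_i and Q_i every 500 steps
        use_Q = not use_Q
print(seen.most_common(3))                  # small multipliers dominate under Q_i
```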
V. RESULTS
In this section we describe how our algorithm compares with the celebrated LLL algorithm [6] on benchmark SVP instances. The LLL algorithm computes a vector which is within a factor 2^{(n−1)/2} of the shortest vector of the lattice and has a complexity of O(n^6 · log³ α), where α is max_{1≤i≤n} ||b_i||. Although it does not compute the shortest vector, the output vector is short enough for applications like polynomial factoring.
We now describe the benchmark lattices on which we ran our algorithm. A class of SVP instances was generated using the techniques developed by Richard Lindner and Michael Schneider [10], who give sample bases for Modular, Random, NTRU, SWIFT and Dual Modular lattices of dimensions 8 and 10. We have tested our code on all these instances and found that our algorithm runs faster and gives a shorter lattice vector when compared to LLL. The results are given in Table I.

TABLE I
COMPARISON OF RESULTS, LLL : METROPOLIS (DATA TAKEN FROM [10])

Lattice Type  | Dim. n | Best Norm Found   | CPU Time in seconds | Input size
              |        | (LLL : Our Algo)  | (LLL : Our Algo)    | in bits
Swift         | 8      | 4.242 : 4.123     | 0.04 : 0.004        | 8
NTRU          | 8      | 4.358 : 3.6055    | 0 : 0.004           | 8
Modular       | 10     | 2.449 : 2.449     | 0.04 : 0.024        | 8
Dual Modular  | 10     | 3.6055 : 3.6055   | 0.004 : 0.004       | 8
Random        | 10     | 2.828 : 2.645     | 0.004 : 0.012       | 8
Based on a result by Ajtai [11], Johannes Buchmann, Richard Lindner, Markus Rückert and Michael Schneider [12][13] constructed a family of lattices for which finding a short vector implies being able to solve difficult computational problems in all lattices of a certain smaller dimension. For completeness we give a quick description of this family.
Definition 5.1: Let n be any positive integer greater than 50, and let c1, c2 be two positive real numbers such that c1 > 2 and c2 ≥ c1 ln 2 − (ln 2)/(50 · ln 50). Let m = c1 · n · ln n and q = n^{c2}. For a matrix X ∈ Z^{n×m}, with column vectors x1, ..., xm, let

L(c1, c2, n, X) = { (v1, ..., vm) ∈ Z^m | Σ_{i=1}^{m} v_i x_i ≡ 0 (mod q) }.

All lattices in the set L(c1, c2, n, ·) = {L(c1, c2, n, X) | X ∈ Z_q^{n×m}} are of dimension m, and the family of lattices L is the set of all L(c1, c2, n, ·).
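To make the construction concrete, the toy sketch below (ours, not from [12]; parameters deliberately undersized, whereas Definition 5.1 requires n > 50) draws a matrix X and tests membership of an integer vector in L(c1, c2, n, X) via the congruence Σ v_i x_i ≡ 0 (mod q).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, c1, c2 = 5, 2.5, 1.8                    # toy values only, for illustration
m = math.ceil(c1 * n * math.log(n))
q = round(n ** c2)
X = rng.integers(0, q, size=(n, m))        # column vectors x_1, ..., x_m in Z_q^n

def in_lattice(v):
    """Membership test for L(c1, c2, n, X): sum_i v_i x_i = 0 (mod q)."""
    return bool(np.all((X @ v) % q == 0))

v = np.zeros(m, dtype=int)
v[0] = q                                   # q * e_1 always lies in the lattice
print(in_lattice(v))                       # True
```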
They also proved that all lattices in L(c1, c2, n, ·) of the family L contain a vector of Euclidean norm less than m, and that it is hard to find such a vector. The challenge is to try different means of finding a short vector; it is defined as follows.
Definition 5.2 (Lattice Challenge): Given a basis of the lattice L_m, together with a norm bound ν, where initially ν = ⌈m⌉, the goal is to find a vector v ∈ L_m with ||v||_2 ≤ ν. Each solution v to the challenge decreases ν to ||v||_2.
We have tested our algorithm on the toy challenges (i.e., those with m ≤ 50), and the comparison with LLL is listed in Table II.

TABLE II
COMPARISON OF RESULTS, LLL : METROPOLIS (DATA TAKEN FROM [13][12]: TOY CHALLENGE)

Dimension n | Best Norm Found      | CPU Time in seconds | Input size
            | (LLL : Our Algo)     | (LLL : Our Algo)    | in bits
15          | 1234.58 : 1147.2279  | 0.016 : 201.651     | 150
20          | 3 : 2.8284           | 0.2680 : 0.052      | 8
25          | 1.73 : 1.73          | 0.008 : 0.004       | 8
30          | 4.123 : 3.8729       | 0.008 : 0.008       | 8
50          | 20.49 : 8.66         | 0.108 : 291.892     | 100
An important step in the LLL algorithm is the computation of the Gram-Schmidt orthogonalization of the basis at hand. In practical implementations, this computation is done using floating point numbers instead of multiprecision arithmetic to speed up the computation. We apply a similar technique here: at each step our transition probabilities are based on the value of the objective function, which is the length of the smallest vector in the current solution, and we compute this length using floating point arithmetic instead of full multiprecision arithmetic.
Our results are very encouraging. For all the examples considered, we found that our algorithm performs better than LLL either in solution value or in running time, and often in both. When the number of bits used to represent the integer values exceeds 100, we found that LLL is faster than our algorithm, but our algorithm still finds a shorter vector than LLL.
VI. CONCLUSION
In this paper we have considered the use of the Metropolis algorithm, and its generalization due to Hastings, for solving SVP, a well-known, hard combinatorial optimization problem. To the best of our knowledge, this is the first such attempt. Our approach rests on an appropriate definition of a search space for the problem, which can be used for some other classes of evolutionary algorithms as well, e.g., the genetic algorithm and the go-with-the-winner algorithm. We have compared the performance of our implementation with that of a standard implementation of the LLL algorithm, and the results we have obtained are fairly encouraging. Given this experience, it is worthwhile to explore whether it can be shown that our approach is efficient for random instances of SVP.
REFERENCES
[1] S. Sanyal, S. Raja, and S. Biswas, "Necessary and sufficient conditions for success of the Metropolis algorithm for optimization," in GECCO '10, ACM, Portland, USA, 2010.
[2] T. Carson, "Empirical and analytic approaches to understanding local search heuristics," PhD thesis, University of California, San Diego, 2001.
[3] P. van Emde Boas, "Another NP-complete problem and the complexity of computing short vectors in a lattice," Tech. Rep. 81-04, Department of Mathematics, University of Amsterdam, Netherlands, 1981.
[4] M. Ajtai, "The shortest vector problem in L2 is NP-hard for randomized reductions," in STOC '98: Proceedings of the 30th Annual ACM Symposium on Theory of Computing, New York, NY, USA, 1998, pp. 10–19.
[5] D. Micciancio, "The shortest vector in a lattice is hard to approximate to within some constant," SIAM Journal on Computing, vol. 30(6), pp. 2008–2035, 2001.
[6] A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász, "Factoring polynomials with rational coefficients," Mathematische Annalen, vol. 261(4), pp. 515–534, 1982.
[7] C. Schnorr, "A more efficient algorithm for lattice basis reduction," Journal of Algorithms, vol. 9(1), pp. 47–62, 1988.
[8] M. Ajtai, "The worst-case behavior of Schnorr's algorithm approximating the shortest nonzero vector in a lattice," in STOC '03: Proceedings of the 35th Annual ACM Symposium on Theory of Computing, ACM Press, 2003, pp. 396–406.
[9] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005, p. 269.
[10] Sage Reference v4.7, "Cryptography," www.sagemath.org/doc/reference/sage/crypto/lattice.html.
[11] M. Ajtai, "Generating hard instances of lattice problems (extended abstract)," in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC '96, ACM, New York, NY, USA, 1996.
[12] J. Buchmann, R. Lindner, M. Rückert, and M. Schneider, "Explicit hard instances of the shortest vector problem," in PQCrypto 2008: Second International Workshop on Post-Quantum Cryptography, LNCS 5299, 2008, pp. 79–94.
[13] TU Darmstadt, "Lattice challenge," www.latticechallenge.org.