
Metropolis Algorithm for solving Shortest Lattice Vector Problem (SVP)

Ajitha Shenoy K B
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur, INDIA
ajith@cse.iitk.ac.in

Somenath Biswas
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur, INDIA
sb@cse.iitk.ac.in

Piyush P Kurur
Department of Computer Science and Engineering
Indian Institute of Technology Kanpur, INDIA
ppk@cse.iitk.ac.in

Abstract—In this paper we study the suitability of the Metropolis algorithm and its generalization for solving the shortest lattice vector problem (SVP). SVP has numerous applications, spanning from robotics to computational number theory, viz., polynomial factorization. At the same time, SVP is a notoriously hard problem. Not only is it NP-hard, there is not even any polynomial-time algorithm known that approximates it to within a polynomial factor. What one normally uses is the LLL algorithm which, although a polynomial-time algorithm, may give solutions that are an exponential factor away from the optimum. In this paper, we define an appropriate search space for the problem, which we use for our implementation of the Metropolis algorithm. We define a suitable neighbourhood structure which makes the diameter of the space polynomially bounded, and we ensure that each search point has only polynomially many neighbours. This search space formulation can also be used for some other classes of evolutionary algorithms, e.g., for genetic and go-with-the-winner algorithms. We have implemented the Metropolis algorithm and Hastings' generalization of the Metropolis algorithm for the SVP. Our results are quite encouraging in all instances when compared with the LLL algorithm.

Index Terms—SVP, Search Space, Metropolis Algorithm, Hastings' Generalization, LLL.

I. INTRODUCTION

We investigate in this paper the suitability of using the Metropolis algorithm [1][2] to solve the shortest lattice vector problem (SVP, for short). The Metropolis algorithm is a widely used randomized search heuristic and is often used in practice to solve optimization problems. It is known that the algorithm performs surprisingly well even for some provably hard problems; e.g., [2] showed that the Metropolis algorithm is efficient for random instances of the graph bisection problem. It is, therefore, of interest to investigate the performance of the algorithm for SVP, which is another hard problem of great interest, from both theoretical and practical considerations.

Van Emde Boas [3] proved in 1981 that SVP is NP-hard for the ℓ∞ norm and mentioned that the same should be true for any ℓp norm. However, proving NP-hardness in the ℓ2 norm (or in any finite ℓp norm) was an open problem for a long time. A breakthrough result by Ajtai [4] in 1998 finally showed that SVP is NP-hard under randomized reductions. Another breakthrough, by Micciancio [5] in 2001, showed that SVP is hard to approximate within some constant factor, specifically any factor less than √2. This is the best result known so far, leaving a huge gap between the √2 hardness factor and the exponential approximation factors achieved by Lenstra et al. [6] in 1982, Schnorr [7] in 1988, and Ajtai et al. [8] in 2003. At the same time, there are many situations in practice which require us to get at least a good solution for SVP, because this is an essential step in most algorithms for factorizing polynomials.

The structure of the paper is as follows: in the following section we define SVP; in Section 3 we define an appropriate search space for our approach to solving SVP; in Section 4 we show how we use our search space to implement the Metropolis algorithm and Hastings' generalization of the Metropolis algorithm, and we also mention why the latter is more appropriate for our search space. Section 5 provides some experimental data on how our implementation compares with a standard implementation of the LLL algorithm. The paper ends with some concluding remarks.

II. SHORTEST VECTOR PROBLEM (SVP)

Definition 2.1 (Lattices): Let B = {b1, b2, ..., bn} be a set of linearly independent vectors in m-dimensional Euclidean space R^m, where m ≥ n. The set L(B) of all vectors a1·b1 + ... + an·bn, the ai's varying over the integers, is called the integer lattice, or simply, the lattice with basis B (or generated by B), and n is the dimension of L(B). If m = n, we say that the lattice is of full dimension.

In this paper, we consider only full dimensional lattices. Furthermore, we consider only lattices whose basis vectors have rational components. In such a case, we can clear the denominators and assume that each of the basis elements is a vector in Z^n instead of R^n.

The basis can be compactly represented as an n×n matrix (also denoted B) with the basis vectors b1, ..., bn as its columns. Then we can write L(B) = {Ba : a ∈ Z^n}.

Problem 2.2 (Shortest Lattice Vector Problem (SVP)): Given a lattice L(B) contained in Z^n, specified by linearly independent vectors b1, ..., bn, the SVP problem is to find a shortest (in Euclidean norm) non-zero vector of L(B).

Without loss of generality, we consider the decision (and the search) version of the above problem: given a basis B as above and a rational number K > 0, the problem is to decide if there is a non-zero vector v that belongs to L(B) such that ||v|| < K, where ||v|| denotes the Euclidean norm of v, and, if the answer is 'yes', to output such a vector. Clearly, SVP can be solved by logarithmically many applications of the decision version of the problem.
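The search-to-decision reduction mentioned above can be sketched as a binary search on the bound K. The oracle name `decide` and the tolerance handling below are our own illustration, not the paper's code:

```python
def shortest_norm(decide, lo, hi, eps=1e-6):
    """Binary search for the length of a shortest non-zero lattice vector,
    given an oracle decide(K) for the decision version of SVP: decide(K)
    is True iff some non-zero lattice vector v has ||v|| < K.
    Invariant: decide(hi) is True and decide(lo) is False."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if decide(mid):
            hi = mid   # a vector shorter than mid exists
        else:
            lo = mid
    return hi          # within eps of the optimum norm
```

With an exact rational oracle, the same loop needs only logarithmically many oracle calls to reach any fixed precision.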

III. SEARCH SPACE FOR SVP

The working of the Metropolis algorithm on an instance can be viewed as a random walk with a bias on a finite neighbourhood structure of states. Each state of the structure represents a feasible solution of the optimization problem instance being solved, and the structure has a goal state, the optimum point, which the algorithm intends to locate. For minimization problems, as our problem SVP is, each point in the structure has a cost, and the goal state is the state with minimum cost (or, for decision versions, a state with cost less than or equal to a given specified cost). At any given step, the algorithm is at one of the points in the search space; it selects one of the neighbouring points and then transits to that point. The point is selected probabilistically, and the bias ensures that the algorithm will reach the goal state eventually without getting stuck at local minima. For the Metropolis algorithm to run efficiently, it is necessary that the neighbourhood structure for an instance satisfy: (a) there should be at most exponentially (in instance size) many elements in the structure; (b) the diameter of the structure should be bounded above by a fixed polynomial in the instance size; (c) the set of neighbours of any element should be computable in time polynomial in the instance size, and, similarly, the cost of any state should also be computable efficiently.

For justifying the way we define our search space for the SVP, we need the following result.

Proposition 3.1: Let B be an n×n non-singular matrix and let u be an n×1 vector with ||u|| ≤ k, k being a non-zero constant. If there is an integer vector w such that Bw = u, then the magnitude of every component of w is bounded above by M = (αn)^n, where α denotes the largest value amongst the magnitudes of all the elements of B and k taken together.

The proof follows easily from Cramer's rule, noting first that the determinant of B is non-zero, and second, that if β is the largest magnitude of all the entries of an n×n matrix Y, then det Y ≤ (nβ)^n.
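The Cramer's-rule computation behind Proposition 3.1 can be written out explicitly; here B_i denotes B with its i-th column replaced by u (our notation for this sketch):

```latex
w_i \;=\; \frac{\det B_i}{\det B}, \qquad
|w_i| \;\le\; \frac{(\alpha n)^n}{|\det B|} \;\le\; (\alpha n)^n \;=\; M,
```

since B is a non-singular integer matrix (so |det B| ≥ 1), and every entry of B_i has magnitude at most α: the entries of B by the definition of α, and the components of u because ||u|| ≤ k ≤ α.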

Definition 3.2 (Search Space for SVP): Let B, an n×n matrix, be the basis of a lattice L, and let k be a given constant. Our goal is to look for a lattice vector of norm k or less. The search space for this instance of the SVP is as follows, where M is as in Proposition 3.1, and m is a parameter fixed in the implementation. (m can be fixed as a constant for all instances, or, more usually, it will be a fixed multiple of n.)

1) [Search space elements] The search space elements consist of matrices of the form A' = [A|I], where I is the n×n identity matrix and A is an n×m matrix with all entries of magnitude bounded above by M, M as in Proposition 3.1.

2) [Definition of neighbourhood] For two elements R' and S', the latter is a neighbour of the former if S can be obtained from R by any of the following elementary operations:

a) by swapping two columns of R;

b) by multiplying a column of R by −1;

c) by adding a power-of-2 multiple of one column of R' to another column of R, provided the resultant column satisfies that the magnitude of each of its components is less than or equal to M. In particular, r_i ← r'_i ± c × r'_j (i ≠ j, 1 ≤ i ≤ m, 1 ≤ j ≤ m+n), where c is a positive integer with c = 2^0, ..., 2^k, k = n·log(αn). (For a matrix R, r_i denotes its i-th column.) (As stated already, this operation is allowed only if the component magnitude condition is satisfied.)

3) [Cost associated with a search space element] For an element A' = [A|I] of the search space, its cost c(A') is defined to be t, where t is the norm of that vector v which has the smallest norm amongst the (n+m) column vectors of B[A|I]. (In other words, by pre-multiplying the basis matrix B to [A|I], we obtain an n×(m+n) matrix; t is the norm of the column vector with least norm amongst these n+m column vectors of the matrix B[A|I].)
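As an illustration of Definition 3.2, the cost function and one random elementary operation can be sketched as follows. This is a minimal sketch assuming NumPy; the function names, and the choice to handle an out-of-bound move by simply returning the current state, are our own, not the paper's implementation:

```python
import numpy as np

def cost(B, A_prime):
    """Cost of a search-space element A' = [A|I]: the smallest Euclidean
    norm among the n+m columns of B·[A|I]; the zero vector costs infinity."""
    norms = np.linalg.norm(B @ A_prime, axis=0)
    norms[norms == 0] = np.inf
    return norms.min()

def random_neighbour(A_prime, n, m, M, k, rng):
    """One uniformly chosen elementary operation on the first m columns of
    A' = [A|I]; the identity columns serve only as sources (index j)."""
    A2 = A_prime.copy()
    op = rng.integers(3)
    if op == 0:                                   # (a) swap two columns of A
        i, j = rng.choice(m, size=2, replace=False)
        A2[:, [i, j]] = A2[:, [j, i]]
    elif op == 1:                                 # (b) negate a column of A
        A2[:, rng.integers(m)] *= -1
    else:                                         # (c) r_i <- r_i ± 2^t · r_j
        i = int(rng.integers(m))
        j = int(rng.integers(m + n))
        if j == i:
            return A_prime                        # treated as a rejected move
        c = 2 ** int(rng.integers(k + 1))
        cand = A2[:, i] + rng.choice([-1, 1]) * c * A_prime[:, j]
        if np.abs(cand).max() > M:                # magnitude bound of Def. 3.2
            return A_prime
        A2[:, i] = cand
    return A2
```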

The following Proposition follows easily from the way we have defined our search space.

Proposition 3.3: Let the n×n matrix B be a basis of the lattice L, and let k be a given constant for the SVP instance.

1) For every element [A|I] of the search space, each of the (n+m) (column) vectors of the matrix B[A|I] is a vector of the lattice L. Also, the (m+n) column vectors of B[A|I] generate the lattice L.

2) If L contains a vector of norm k or less, then the search space for SVP with L, k will contain an element [A|I] such that one of the (m+n) column vectors of B[A|I] will be of norm k or less. (The search space uses M as defined in Proposition 3.1.)

The first part is obvious, as every column vector of B[A|I] is an integer linear combination of the n lattice vectors. Also, as the vectors of B are contained in the vectors of B[A|I], B and B[A|I] both generate the same lattice (here we use the assumption that L is full dimensional). The second part of the Proposition also follows easily: we know from Proposition 3.1 that if there is a lattice vector of norm k or less, then there is a v, the magnitude of each component of v bounded by M, such that the norm of Bv is k or less. Our search space will have a member [A|I] with A containing v, as the latter can be obtained from the identity matrix I by the elementary operations we allow.

We now show that our search space definition satisfies the requirements we stated at the start of the Section. First we prove that every search space element has at most polynomially many neighbours.

Theorem 3.4: Let n, m be as in Definition 3.2 and M be as in Proposition 3.1. The number of neighbours of any node A' in the search space is O(m² log M). (As log M is n log(αn), and log α is the number of bits required to specify the largest magnitude number in the problem instance, we therefore have that every element has at most polynomially many neighbours.)

The proof follows by noting that a search space element has at most C(m,2) neighbours through the first kind of elementary operation, m through the second kind, and C(m,2) × 2(k+1) + 2mn(k+1) of the third kind, where k is O(log M).
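Summing the three contributions recovers the stated bound (recall that n ≤ m, since m is a multiple of n, and k = O(log M)):

```latex
d \;\le\; \binom{m}{2} \;+\; m \;+\; 2(k+1)\left(\binom{m}{2} + mn\right)
\;=\; O\!\left(m^{2}(k+1)\right) \;=\; O\!\left(m^{2}\log M\right).
```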

Next, our goal is to show that the search space has a polynomially bounded diameter.

Theorem 3.5: There is an O(mn log M)-length path between the elements A' = [A|I] and B' = [B|I] (and vice versa), where A' and B' are any two elements in the search space.

Proof: Let us first show how we can replace the i-th column a_i of A with b_i, the i-th column of B. In the first stage, using elementary operations, we get e_j in place of a_i, where e_j denotes the j-th column of the identity matrix I (since A has m ≥ n columns and m is a multiple of n, i = qn + j for some q ∈ Z, where 1 ≤ j ≤ n, with j = n if i = qn). We have to set the j-th component of a_i to 1 and the other components of a_i to 0. Suppose that the r-th component of a_i is x. Let x = 2^0·x_0 + ... + 2^k·x_k, where each x_t is 0 or 1, and k is O(log M). For r ≠ j, we set the r-th component to zero by performing the elementary operations a_i ← a_i − x·e_r, in at most k+1 elementary operations. For r = j, we set the component to one by performing the elementary operation a_i ← a_i − (x−1)·e_j in at most k+1 elementary operations, since x − 1 = y = 2^0·y_0 + ... + 2^k·y_k, where each y_t is 0 or 1, 0 ≤ t ≤ k. So the total number of elementary operations to set all the components of a_i is bounded by n(k+1). Therefore, the total number of elementary operations to set a_i for all 1 ≤ i ≤ m is bounded by mn(k+1). Now, in the second stage, using elementary operations, we get b_i in place of a_i = e_j. Let the r-th component of b_i be z = 2^0·z_0 + ... + 2^k·z_k, where each z_t is 0 or 1, 0 ≤ t ≤ k. If r = j, by performing the elementary operation a_i ← a_i + (z−1)·e_j, we can set the j-th component of a_i to the j-th component of b_i in at most k+1 elementary operations. For r ≠ j, we set the r-th component of a_i to the r-th component of b_i by performing the elementary operation a_i ← a_i + z·e_r. This implies that we can set a_i to b_i in at most n(k+1) elementary operations. Hence we can set a_i to b_i for all 1 ≤ i ≤ m in at most nm(k+1) elementary operations. Therefore, we can set A to B in at most 2nm(k+1) elementary operations, i.e., O(mn log M). Hence the proof.

We have proved that the entries of A are bounded by M = (αn)^n, and Theorem 3.5 shows that there exists a path using which we can reach any node in the search space. Hence our search space of Definition 3.2 ensures that the entries of intermediate matrices will not grow exponentially, and that we can reach any state from any other within a polynomial number of steps. We are interested in finding a shortest non-zero vector in the lattice, but there is a chance that we may obtain the zero vector while applying the elementary operations of Definition 3.2 to the matrix A' = [A|I]. To avoid this, we define the cost of the zero vector to be infinity, which prevents the algorithm from moving to such neighbours due to their very high cost. Let us now define the Metropolis algorithm for SVP.

IV. METROPOLIS ALGORITHM

The pseudo-code of the Metropolis algorithm is given below (Algorithm 1). As mentioned before, the Metropolis algorithm is the execution of a Markov process. It is therefore completely defined once the transition probabilities are defined.

Consider the search space and neighbourhood structure as defined in Definition 3.2. Observe that only one column of the current solution is changed by the elementary operation performed on it. The cost function can be modified as follows: let R' be the current solution and let S' be the new solution, obtained from R' by applying one of the elementary operations of Definition 3.2 to the r-th column of R. Then the cost function c(R') is the Euclidean norm of the r-th column vector of B·R, and c(S') is the Euclidean norm of the r-th column vector of B·S.

The Metropolis algorithm on instance R' = [R|I] runs a Markov chain X^{R'} = (X^{R'}_1, X^{R'}_2, ...), using the temperature parameter T. The state space of the chain is the set S^{R'} of the feasible solutions of R'. Let d denote the degree of a node in the search graph, where d = O(m²·log M) as in Theorem 3.4. Let R' and S' denote any two feasible solutions, and let N(R') denote the neighbourhood of R'. Then the transition probabilities are as follows:

q_{R'S'} =
  0                              if R' ≠ S' and S' ∉ N(R');
  e^{−(c(S')−c(R'))/T} / d       if c(S') > c(R') and S' ∈ N(R');
  1/d                            if c(R') ≥ c(S') and S' ∈ N(R');
  1 − Σ_{J'≠R'} q_{R'J'}         if R' = S'.

The complete algorithm (Algorithm 1) is given below.

HASTINGS' GENERALIZATION OF THE METROPOLIS ALGORITHM

The way the Metropolis algorithm decides about moving from the current state s_i to a state in the neighbourhood can be seen as a two-stage process: first, choose a neighbour s_j uniformly at random (the proposal stage), and then, with a probability α which depends upon the relative costs of the solutions associated with s_j and s_i, move to s_j or remain at s_i (the acceptance stage).

In our case, the neighbours of a state [A|I] are states [A'|I], where A' is obtained by performing an elementary operation using the vectors in A and I. Some of the elementary operations represent what we call long jumps, because a vector v is replaced by another vector u where there is a large difference between the norms of v and u. This happens when v is replaced by v ± cw with the constant c large. It is desirable

to have control over how extensively our algorithm makes use of such long jumps. This is not possible in the standard Metropolis algorithm, as the proposal stage chooses a neighbour uniformly at random.

Algorithm 1 Metropolis Algorithm
1: Input: B ← basis for the lattice L and a rational number K
2: Output: matrix R' such that B·R' contains a vector v with ||v|| ≤ K
3: Let I ← the n×n identity matrix. Let R' = [R|I] be the starting state in the search space as in Definition 3.2, and let c(R') denote the cost of R' as defined at the beginning of this section.
4: Set BestNorm = c(R')
5: while BestNorm > K do
6:   Select a neighbour S' of R' uniformly at random by performing one of the elementary operations defined in Definition 3.2
7:   if BestNorm > c(S') then
8:     BestNorm = c(S')
9:   end if
10:  Set R' = S' with probability α = min( e^{−c(S')/T} / e^{−c(R')/T}, 1 )
11: end while
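The acceptance rule of Algorithm 1 is the standard Metropolis rule, and its loop shape can be sketched generically. This is a sketch under our own naming; `cost`, `neighbour`, and the step budget are illustrations, not part of the paper's implementation:

```python
import math
import random

def metropolis_minimise(cost, neighbour, x0, T, K, max_steps=10000, seed=0):
    """Algorithm 1 in miniature: propose a uniform neighbour, always accept
    downhill moves, and accept uphill moves with probability
    exp(-(c(S') - c(R'))/T); stop when the best cost seen is at most K."""
    rng = random.Random(seed)
    x = x0
    best = cost(x0)
    for _ in range(max_steps):
        if best <= K:
            break
        y = neighbour(x, rng)              # proposal stage (uniform)
        cx, cy = cost(x), cost(y)
        best = min(best, cy)
        alpha = min(math.exp((cx - cy) / T), 1.0)
        if rng.random() < alpha:           # acceptance stage
            x = y
    return best

# Toy usage: minimise |v| over the integers with ±1 moves.
best = metropolis_minimise(abs, lambda v, r: v + r.choice([-1, 1]),
                           x0=25, T=1.0, K=0)
```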

To overcome this problem, we make use of Hastings' generalization [9] of the Metropolis algorithm. In this generalization, we can use any probability distribution to select the neighbour of a state in the proposal stage. Let q_{xz} denote the probability with which we select a neighbour z when the current state is x. Let x be a state and let y_1, ..., y_{n_x} be the neighbours of x. Then

q_{xz} =
  0    if x ≠ z and z ∉ N(x);
  θ    if x = z;
  r_i  if z = y_i,

where the values r_i can be chosen appropriately, depending on how much we want to invest in each of the strategies.

The Hastings-generalized Metropolis algorithm M2 runs on the same state space but has different transition probabilities. Suppose the chain M2 is at a state x at some step. Then:

1) With probability q_{xz}, M2 selects a state z in the neighbourhood.

2) If z = x, then the next state of M2 is x.

3) If z = y_i, we first compute α, defined as

α = min( (e^{−c(y_i)/T} · q_{y_i x}) / (e^{−c(x)/T} · q_{x y_i}), 1 ).

Here, for any state z, c(z) represents the cost of the candidate solution of z, and T is a fixed temperature parameter.

4) We move to y_i with probability α; else we remain in the present state x.

It can be verified easily that the chain M2 is time-reversible and that in its stationary distribution the probability of x, π_x, is given by

π_x = e^{−c(x)/T} / Z,

where Z is the normalizing factor Σ_i e^{−c(i)/T}. The chain M2 is Hastings' generalization. This chain has the same stationary distribution as the usual Metropolis algorithm, but has the flexibility of fine-tuning the probability of choosing a neighbour to reflect the structure of the problem at hand. In our implementation, we shall keep q_{x y_i} the same as q_{y_i x}. The detailed algorithm (Algorithm 2) is given below. In the next section we will compare the results of our algorithm with those of the LLL algorithm.

Algorithm 2 Hastings' Generalization
1: Input: B ← basis for the lattice L and a rational number K
2: Output: matrix R' such that B·R' contains a vector v with ||v|| ≤ K
3: Let I ← the n×n identity matrix. Let R' = [R|I] be the starting state in the search space as in Definition 3.2, and let c(R') denote the cost of R' as defined at the beginning of this section. Let d denote the total number of neighbours as in Theorem 3.4.
4: Set BestNorm = c(R')
5: while BestNorm > K do
6:   Select a neighbour S' of R' by performing one of the elementary operations defined below:
     • swap two columns of R, with probability C(m,2)/d;
     • multiply a column of R by −1, with probability m/d;
     • add a power-of-2 multiple of a column of R' to another column of R, i.e., r_i ← r'_i ± c × r'_j (i ≠ j, 1 ≤ i ≤ m, 1 ≤ j ≤ m+n, where c = 2^0, ..., 2^k, k = n·log(αn)), with probability ((d − C(m,2) − m)/d)·P_i, where P_i denotes the probability of selecting the value c = 2^i and Σ_{i=0}^{k} P_i = 1.
     [We can use more than one probability distribution to select values for c. In our implementation we have selected two probability distributions, P_i = 1/(k+1) and Q_i = 2(k+1−i)/((k+1)(k+2)), to select values for c. We alternate between P_i and Q_i after every fixed number of steps (500 steps).]
7:   if BestNorm > c(S') then
8:     BestNorm = c(S')
9:   end if
10:  Set R' = S' with probability α = min( e^{−c(S')/T} / e^{−c(R')/T}, 1 )
11: end while
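The propose-then-accept structure of Algorithm 2 can be sketched generically as follows. Here `propose(x, rng)` samples a neighbour and `q(x, z)` returns its proposal probability; the names and interfaces are our own illustration, not the paper's code:

```python
import math
import random

def hastings_step(x, propose, cost, q, T, rng):
    """One step of Hastings' generalization: draw a proposal z, then accept
    with alpha = min( e^{-c(z)/T} q(z,x) / ( e^{-c(x)/T} q(x,z) ), 1 ).
    With a symmetric q this reduces to the plain Metropolis rule."""
    z = propose(x, rng)
    if z == x:
        return x
    alpha = min(math.exp((cost(x) - cost(z)) / T) * q(z, x) / q(x, z), 1.0)
    return z if rng.random() < alpha else x

# Toy usage: symmetric proposal over ±1 moves on the integers.
rng = random.Random(1)
state = hastings_step(10, lambda v, r: v + r.choice([-1, 1]),
                      abs, lambda a, b: 0.5, 1.0, rng)
```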

TABLE I
COMPARISON OF RESULTS, LLL : METROPOLIS (DATA TAKEN FROM [10])

Lattice Type  | Dim. n | Best Norm Found (LLL : Our Algo) | CPU Time in seconds (LLL : Our Algo) | Input size in bits
Swift         | 8      | 4.242 : 4.123                    | 0.04 : 0.004                         | 8
NTRU          | 8      | 4.358 : 3.6055                   | 0 : 0.004                            | 8
Modular       | 10     | 2.449 : 2.449                    | 0.04 : 0.024                         | 8
Dual Modular  | 10     | 3.6055 : 3.6055                  | 0.004 : 0.004                        | 8
Random        | 10     | 2.828 : 2.645                    | 0.004 : 0.012                        | 8

V. RESULTS

In this section we describe how our algorithm compares with the celebrated LLL [6] algorithm on benchmark SVP instances. The LLL algorithm computes a vector which is within a factor 2^{(n−1)/2} of the shortest vector of the lattice and has a complexity of O(n^6 · log³ α), where α is max_{1≤i≤n} ||b_i||. Although it does not compute the shortest vector, the output vector is short enough for applications like polynomial factoring.

We now describe the benchmark lattices that we ran this algorithm on. A class of SVP instances is generated using the techniques developed by Richard Lindner and Michael Schneider [10]. They have given sample bases for Modular, Random, NTRU, SWIFT and Dual Modular lattices of dimension 10. We have tested our code on all these instances and found that our algorithm runs faster and gives a shorter lattice vector when compared to LLL. The test results are given in Table I.

Based on the result by Ajtai [11], Johannes Buchmann, Richard Lindner, Markus Ruckert and Michael Schneider [12][13] constructed a family of lattices for which finding a short vector implies being able to solve difficult computational problems in all lattices of a certain smaller dimension. For completeness, we give a quick description of this family.

Definition 5.1: Let n be any positive integer greater than 50, and let c1, c2 be any two positive real numbers such that c1 > 2 and c2 ≤ c1 ln 2 − (ln 2)/(50·ln 50). Let m = c1·n·ln n and q = n^{c2}. For a matrix X ∈ Z^{n×m} with column vectors x1, ..., xm, let

L(c1, c2, n, X) = { (v1, ..., vm) ∈ Z^m | Σ_{i=1}^{m} v_i·x_i ≡ 0 mod q }.

All lattices in the set L(c1, c2, n, ·) = {L(c1, c2, n, X) | X ∈ Z_q^{n×m}} are of dimension m, and the family of lattices L is the set of all L(c1, c2, n, ·).
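The congruence defining L(c1, c2, n, X) is straightforward to check; below is a small sketch (our own helper, with X given as n rows of length m):

```python
def in_lattice(v, X, q):
    """True iff v = (v1, ..., vm) satisfies sum_i v_i * x_i ≡ 0 (mod q),
    where the x_i are the columns of the n x m matrix X, checked row-wise."""
    n, m = len(X), len(v)
    return all(sum(X[r][i] * v[i] for i in range(m)) % q == 0
               for r in range(n))
```

Note that q·Z^m is always contained in such a lattice, so, e.g., (q, 0, ..., 0) is a member for every X; this also shows the lattice has full dimension m.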

They also proved that all lattices in L(c1, c2, n, ·) of the family L contain a vector with Euclidean norm less than √m, and that it is hard to find such a vector. The challenge is to try different means to find a short vector. The challenge is defined as follows:

Definition 5.2 (Lattice Challenge): Given a lattice basis of the lattice L_m, together with a norm bound ν. Initially, ν = ⌈√m⌉. The goal is to find a vector v ∈ L_m with ||v||_2 ≤ ν. Each solution v to the challenge decreases ν to ||v||_2.

We have tested our algorithm on toy challenges (i.e., with m ≤ 50), and the comparison results with LLL are listed in Table II.

TABLE II
COMPARISON OF RESULTS, LLL : METROPOLIS (DATA TAKEN FROM [13][12]: TOY CHALLENGE)

Dimension n | Best Norm Found (LLL : Our Algo) | CPU Time in seconds (LLL : Our Algo) | Input size in bits
15          | 1234.58 : 1147.2279              | 0.016 : 201.651                      | 150
20          | 3 : 2.8284                       | 0.2680 : 0.052                       | 8
25          | 1.73 : 1.73                      | 0.008 : 0.004                        | 8
30          | 4.123 : 3.8729                   | 0.008 : 0.008                        | 8
50          | 20.49 : 8.66                     | 0.108 : 291.892                      | 100

An important step in the LLL algorithm is the computation of the Gram-Schmidt orthogonalisation of the basis in hand. In practical implementations, this computation is done using floating point numbers instead of multiprecision arithmetic to speed up the computation. We apply a similar technique here. At each step, our transition probabilities are based on the value of the objective function, which is the length of the smallest vector in the current solution. We compute this length using floating point arithmetic instead of full multiprecision arithmetic.
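The trade-off can be seen on a single column: exact rational arithmetic gives the squared norm exactly, while the floating-point norm (the quantity our transition probabilities consume) is computed far more cheaply. The column values here are only an illustration:

```python
import math
from fractions import Fraction

col = [Fraction(3), Fraction(4), Fraction(12)]

# Exact squared norm, in multiprecision rational arithmetic.
exact_sq = sum(c * c for c in col)

# Floating-point norm, as used when evaluating the objective function.
approx = math.sqrt(sum(float(c) ** 2 for c in col))
```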

Our results are very encouraging. For all the examples considered, we found that our algorithm performs better than LLL either in solution quality or in running time, and often in both. When the number of bits used to represent the integer values exceeds 100, we found that LLL is faster than our algorithm, but our algorithm still gives a shorter vector than LLL.

VI. CONCLUSION

In this paper we have considered the use of the Metropolis algorithm, and its generalization due to Hastings, for solving SVP, a well-known, hard combinatorial optimization problem. To the best of our knowledge, this is the first such attempt. Our approach rests on an appropriate definition of a search space for the problem, which can be used for some other classes of evolutionary algorithms as well, e.g., the genetic algorithm and the go-with-the-winner algorithm. We have compared the performance of our implementation with that of a standard implementation of the LLL algorithm, and the results we have obtained are fairly encouraging. Given this experience, it is worthwhile to explore whether it can be shown that our approach is efficient for random instances of SVP.

REFERENCES

[1] S. Sanyal, S. Raja, and S. Biswas, "Necessary and sufficient conditions for success of the Metropolis algorithm for optimization," in GECCO '10, ACM, Portland, USA, 2010.

[2] T. Carson, "Empirical and analytic approaches to understanding local search heuristics," PhD thesis, University of California, San Diego, 2001.

[3] P. van Emde Boas, "Another NP-complete problem and the complexity of computing short vectors in a lattice," Tech. Rep. 8104, University of Amsterdam, Department of Mathematics, Netherlands, 1981.

[4] M. Ajtai, "The shortest vector problem in L2 is NP-hard for randomized reductions," in STOC '98: Proceedings of the 30th Annual ACM Symposium on Theory of Computing, New York, NY, USA, 1998, pp. 10–19.

[5] D. Micciancio, "The shortest vector in a lattice is hard to approximate to within some constant," SIAM Journal on Computing, vol. 30(6), pp. 2008–2035, 2001.

[6] A. Lenstra, H. W. Lenstra Jr., and L. Lovasz, "Factoring polynomials with rational coefficients," Mathematische Annalen, vol. 261(4), pp. 515–534, 1982.

[7] C. Schnorr, "A more efficient algorithm for lattice basis reduction," Journal of Algorithms, vol. 9(1), pp. 47–62, 1988.

[8] M. Ajtai, "The worst-case behavior of Schnorr's algorithm approximating the shortest nonzero vector in a lattice," in STOC '03: Proceedings of the 35th Annual ACM Symposium on Theory of Computing, ACM Press, 2003, pp. 396–406.

[9] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005, p. 269.

[10] Sage Reference v4.7, "Cryptography," www.sagemath.org/doc/reference/sage/crypto/lattice.html.

[11] M. Ajtai, "Generating hard instances of lattice problems (extended abstract)," in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC '96, ACM, New York, NY, USA, 1996.

[12] J. Buchmann, R. Lindner, M. Ruckert, and M. Schneider, "Explicit hard instances of the shortest vector problem," in PQCrypto 2008: 2nd International Workshop on Post-Quantum Cryptography, LNCS 5299, 2008, pp. 79–94.

[13] TU Darmstadt, "Lattice Challenge," www.latticechallenge.org.