... Two final consequences of Theorems 1.2 and 1.4 are bounds for the number of zeros in a unit interval and a short interval. These types of estimates have several applications in number theory, such as providing an effective disproof of the Mertens conjecture [Pin87; RK23], improving the error term in the explicit version of the Riemann–von Mangoldt formula [Dud16], and consequently obtaining improvements related to primes between consecutive cubes and consecutive powers [JCH24]. More generally, these two types of estimates are useful for any problem that requires an estimate for the sum over the zeros of ζ(s) restricted to a certain range. ...
In this article, we improve the recent work of Hasanalizade, Shen, and Wong by establishing a sharper explicit bound, of the shape $a\log T + b\log\log T + c$ with smaller admissible constants, for the error term in the Riemann–von Mangoldt formula for $N(T)$, where $N(T)$ is the number of non-trivial zeros $\rho = \beta + i\gamma$, with $0 < \gamma \le T$, of the Riemann zeta-function $\zeta(s)$. The main source of improvement comes from implementing new subconvexity bounds for $\zeta(s)$ on some $\sigma$-lines inside the critical strip.
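As a concrete illustration of $N(T)$ (my sketch using the mpmath library, not anything from the article), one can count zeros directly and compare against the classical main term $\frac{T}{2\pi}\log\frac{T}{2\pi e}+\frac{7}{8}$:

```python
# Sketch: count zeta zeros with 0 < gamma <= T via mpmath and compare
# with the main term T/(2*pi) * log(T/(2*pi*e)) + 7/8.
from mpmath import mp, zetazero, log, pi, e

mp.dps = 15
T = 100

count, k = 0, 1
while True:
    gamma = zetazero(k).imag     # k-th zero is 1/2 + i*gamma
    if gamma > T:
        break
    count += 1
    k += 1

main_term = T / (2 * pi) * log(T / (2 * pi * e)) + mp.mpf(7) / 8
print(count, main_term)          # N(100) and its smooth approximation
```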
We use state-of-the-art lattice algorithms to improve the upper bound on the lowest counterexample to the Mertens conjecture to roughly $\exp(1.96\times 10^{19})$, which is significantly below the conjectured value of roughly $\exp(5.15\times 10^{23})$ by Kotnik and van de Lune (Exp Math 13:473–481, 2004).
In this article, we study the Mertens conjecture. We revisit and improve the original constructive disproof of János Pintz. We obtain a new lower bound for the minimal counterexample and new numerical results for this conjecture.
Let $M(x)=\sum_{1\le n\le x}\mu(n)$, where $\mu(n)$ is the Möbius function. The Mertens conjecture that $|M(x)| < \sqrt{x}$ for all $x>1$ was disproved in 1985 by Odlyzko and te Riele [13]. In the present paper, the known lower bound 1.06 for $\limsup_{x\to\infty} M(x)x^{-1/2}$ is raised to 1.218, and the known upper bound $-1.009$ for $\liminf_{x\to\infty} M(x)x^{-1/2}$ is lowered to $-1.229$. In addition, the explicit upper bound of Pintz [14] on the smallest number for which the Mertens conjecture is false is reduced from $\exp(3.21\times 10^{64})$ to $\exp(1.59\times 10^{40})$. Finally, new numerical evidence is presented for the conjecture that $M(x)x^{-1/2}=\Omega_{\pm}(\sqrt{\log\log\log x})$.
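For readers who want to experiment, here is a minimal Python sketch (mine, not the paper's code) that computes $M(x)$ with a linear Möbius sieve and evaluates the normalized function $q(x)=M(x)x^{-1/2}$:

```python
import math

def mobius_sieve(n):
    """Compute mu(1..n) with a standard linear (smallest-prime-factor) sieve."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]
    return mu

def mertens_values(n):
    """Return the list M[0..n] with M[x] = sum of mu(k) for k <= x."""
    mu = mobius_sieve(n)
    M = [0] * (n + 1)
    for x in range(1, n + 1):
        M[x] = M[x - 1] + mu[x]
    return M

if __name__ == "__main__":
    N = 10**6
    M = mertens_values(N)
    # q(x) = M(x)/sqrt(x); the Mertens conjecture asserted |q(x)| < 1.
    for x in (10**3, 10**4, 10**5, 10**6):
        print(x, M[x], M[x] / math.sqrt(x))
```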
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating-point version of the $L^3$-algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the $L^3$-algorithm with deep insertions and a practical algorithm for block Korkin–Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
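These reduction algorithms survive in modern libraries; as a rough illustration, assuming the third-party fpylll package rather than the authors' original implementation, LLL followed by block Korkin–Zolotarev reduction looks like this:

```python
# Sketch using the fpylll bindings to fplll (an assumption: a modern
# descendant of these algorithms, not the paper's implementation).
from fpylll import IntegerMatrix, LLL, BKZ

# Random 40 x 40 basis with 30-bit entries in the "uniform" model.
A = IntegerMatrix.random(40, "uniform", bits=30)

LLL.reduction(A)                            # floating-point LLL
BKZ.reduction(A, BKZ.EasyParam(block_size=20))  # block Korkin-Zolotarev

print(sum(x * x for x in A[0]))             # squared length of first basis vector
```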
This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend ideas from the IEEE 754 standard to arbitrary precision, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved, with no significant slowdown with respect to other arbitrary-precision tools, and discuss a few applications where such a library can be useful.
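To see correct rounding at arbitrary precision in action, here is a small sketch using gmpy2, a Python binding to this library (the use of gmpy2, rather than the C API, is my choice for illustration):

```python
# Sketch: correct rounding with MPFR via the gmpy2 Python bindings.
import gmpy2
from gmpy2 import mpfr

# 100 bits of precision, round-to-nearest (the IEEE 754 default mode).
gmpy2.get_context().precision = 100

x = mpfr("0.1")                         # 0.1 correctly rounded to 100 bits
s = sum(mpfr("0.1") for _ in range(10))
print(x)
print(1 - s)   # tiny residual: 0.1 is not exactly representable in binary
```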
In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial $f \in \mathbb{Q}[X]$ in one variable with rational coefficients, find the decomposition of $f$ into irreducible factors in $\mathbb{Q}[X]$. It is well known that this is equivalent to factoring primitive polynomials $f \in \mathbb{Z}[X]$ into irreducible factors in $\mathbb{Z}[X]$. Here we call $f \in \mathbb{Z}[X]$ primitive if the greatest common divisor of its coefficients (the content of $f$) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is $O(n^{12} + n^9(\log|f|)^3)$.
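The factoring task itself is easy to experiment with; the following Python sketch uses SymPy, which illustrates the problem but does not necessarily use this algorithm internally:

```python
# Sketch: factoring f in Q[X] into irreducibles, the task this paper solves.
# (SymPy is used only to demonstrate the problem statement.)
from sympy import symbols, factor_list, Rational

X = symbols("X")
f = X**4 - Rational(1, 4)           # f in Q[X]
content, factors = factor_list(f)   # factor over the rationals
print(content)                       # rational content pulled out front
for g, mult in factors:
    print(g, mult)                   # irreducible factors with multiplicities
```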
We describe a numerical experiment concerning the order of magnitude of $q(x) = M(x)/\sqrt{x}$, where $M(x)$ is the Mertens function (the summatory function of the Möbius function). It is known that, if the Riemann hypothesis is true and all nontrivial zeros of the Riemann zeta-function are simple, $q(x)$ can be approximated by a series of trigonometric functions of $\log x$. We try to obtain an $\Omega$-estimate of the order of $q(x)$ by searching for increasingly large extrema of partial sums of this series, truncated at several different numbers of terms. Based on the extrema found in the range examined, we conjecture that $q(x) = \Omega_{\pm}(\sqrt{\log\log\log x})$.
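A minimal sketch of this approximation (my illustration with mpmath, not the authors' code): under RH and simple zeros, $q(x)\approx\sum_{\rho}2\,\mathrm{Re}\!\left(x^{i\gamma}/(\rho\,\zeta'(\rho))\right)$ over the first few zeros $\rho=\tfrac12+i\gamma$:

```python
# Sketch: truncated zero-sum approximation of q(x) = M(x)/sqrt(x),
# valid heuristically under RH and simplicity of the zeros.
from mpmath import mp, zetazero, zeta, diff, mpc, re, exp, log

mp.dps = 20
K = 50                    # number of zeros kept; more terms, better fit

zeros = [zetazero(k) for k in range(1, K + 1)]            # rho = 1/2 + i*gamma
coeffs = [1 / (rho * diff(zeta, rho)) for rho in zeros]   # 1/(rho * zeta'(rho))

def q_approx(x):
    lx = log(x)
    s = mp.mpf(0)
    for rho, c in zip(zeros, coeffs):
        s += 2 * re(c * exp(mpc(0, rho.imag) * lx))  # 2*Re(x^(i*gamma) * c)
    return s

print(q_approx(10**6))
```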
The Mertens conjecture states that $|M(x)| < x^{1/2}$ for all $x > 1$, where $M(x) = \sum_{n \le x} \mu(n)$ and $\mu(n)$ is the Möbius function. This conjecture has attracted a substantial amount of interest in its almost 100 years of existence because its truth was known to imply the truth of the Riemann hypothesis. This paper disproves the Mertens conjecture by showing that $\limsup_{x\to\infty} M(x)\,x^{-1/2} > 1.06$. The disproof relies on extensive computations with the zeros of the zeta function, and does not provide an explicit counterexample.
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, with $\omega \in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O(n^{\omega}(\log\|B\| + n)^{1+\varepsilon})$ for precision $O(\log\|B\|)$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
Lattice-based cryptography relies on generating random bases which are difficult to fully reduce. Given a lattice basis (such as the private basis for a cryptosystem), all other bases are related by multiplication by matrices in $\mathrm{GL}_n(\mathbb{Z})$. We compare the strengths of various methods to sample random elements of $\mathrm{GL}_n(\mathbb{Z})$, finding some are stronger than others with respect to the problem of recognizing rotations of the lattice. In particular, the standard algorithm of multiplying unipotent generators together (as implemented in Magma's RandomSLnZ command) generates instances of this last problem which can be efficiently broken, even in dimensions nearing 1,500. Likewise, we find that the random basis generation method in one of the NIST Post-Quantum Cryptography competition submissions (DRS) generates instances which can be efficiently broken, even at its 256-bit security settings. Other random basis generation algorithms (some older, some newer) are described which appear to be much stronger.
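The "multiply unipotent generators" method that the paper shows to be weak is easy to state in code; a minimal sketch (my construction, not the paper's or Magma's):

```python
# Sketch: sample a "random" unimodular matrix by multiplying unipotent
# generators E_ij(c) = I + c*e_ij, the method the paper finds weak.
import random

def unipotent(n, i, j, c):
    """Identity matrix with an extra entry c at position (i, j), i != j."""
    E = [[1 if r == s else 0 for s in range(n)] for r in range(n)]
    E[i][j] = c
    return E

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][s] for k in range(n)) for s in range(n)]
            for r in range(n)]

def random_unimodular(n, steps=50, cmax=3):
    U = [[1 if r == s else 0 for s in range(n)] for r in range(n)]
    for _ in range(steps):
        i, j = random.sample(range(n), 2)            # distinct indices
        U = matmul(U, unipotent(n, i, j, random.randint(-cmax, cmax)))
    return U  # det(U) = +1: each unipotent factor has determinant 1

for row in random_unimodular(4):
    print(row)
```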
The Mertens function is defined as $M(x) = \sum_{n \le x} \mu(n)$, where $\mu(n)$ is the M\"obius function. The Mertens conjecture states that $|M(x)| < \sqrt{x}$ for $x > 1$, which was proven false in 1985 by showing $\limsup_{x\to\infty} M(x)x^{-1/2} > 1.06$ and $\liminf_{x\to\infty} M(x)x^{-1/2} < -1.009$. The techniques used there are revisited here with present-day hardware and algorithms, giving improved lower and upper bounds of $-1.837625$ and $1.826054$. In addition, $M(x)$ was computed for all $x \le 10^{16}$, recording all extrema, all zeros, and values sampled at a regular interval. Lastly, an algorithm to compute $M(x)$ in $O(x^{2/3+\varepsilon})$ time was used on all powers of two up to $2^{73}$.
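A standard way to reach roughly $O(x^{2/3})$ time, which may or may not match the paper's exact method, is the identity $M(x)=1-\sum_{d=2}^{x}M(\lfloor x/d\rfloor)$ combined with a sieve for small arguments; a Python sketch:

```python
# Sketch (a standard technique, not necessarily the paper's exact method):
# compute M(n) in roughly O(n^(2/3)) time from the identity
#   M(n) = 1 - sum_{d=2}^{n} M(floor(n/d)),
# grouping equal values of floor(n/d) and sieving small arguments.
def mertens(n):
    cutoff = max(2, int(n ** (2 / 3)))
    # Linear Mobius sieve and prefix sums up to the cutoff.
    mu = [1] * (cutoff + 1)
    is_comp = [False] * (cutoff + 1)
    primes = []
    for i in range(2, cutoff + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > cutoff:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    small = [0] * (cutoff + 1)
    for i in range(1, cutoff + 1):
        small[i] = small[i - 1] + mu[i]

    cache = {}

    def M(x):
        if x <= cutoff:
            return small[x]
        if x in cache:
            return cache[x]
        total, d = 1, 2
        while d <= x:
            q = x // d
            d_hi = x // q               # largest d' with x // d' == q
            total -= (d_hi - d + 1) * M(q)
            d = d_hi + 1
        cache[x] = total
        return total

    return M(n)

print(mertens(10**6))  # quick sanity check against a direct sieve
```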
In this paper, we investigate a variant of the BKZ algorithm, called progressive BKZ, which performs BKZ reductions by starting with a small blocksize and gradually switching to larger blocks as the process continues. We discuss techniques to accelerate the speed of the progressive BKZ algorithm by optimizing the following parameters: blocksize, searching radius and probability for pruning of the local enumeration algorithm, and the constant in the geometric series assumption (GSA). We then propose a simulator for predicting the length of the Gram-Schmidt basis obtained from the BKZ reduction. We also present a model for estimating the computational cost of the proposed progressive BKZ by considering the efficient implementation of the local enumeration algorithm and the LLL algorithm. Finally, we compare the cost of the proposed progressive BKZ with that of other algorithms using instances from the Darmstadt SVP Challenge. The proposed algorithm is approximately 50 times faster than BKZ 2.0 (proposed by Chen-Nguyen) for solving the SVP Challenge up to 160 dimensions.
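The progressive strategy itself is just a loop over increasing blocksizes; a minimal sketch with the third-party fpylll library (the authors' optimized parameter schedules are not reproduced):

```python
# Sketch of the progressive-BKZ idea via fpylll (an assumption: the
# paper's tuned blocksize/pruning/GSA parameters are omitted here).
from fpylll import IntegerMatrix, LLL, BKZ

A = IntegerMatrix.random(60, "uniform", bits=30)
LLL.reduction(A)                          # cheap preprocessing pass

for beta in range(4, 32, 4):              # gradually increase the blocksize
    BKZ.reduction(A, BKZ.EasyParam(block_size=beta))
    print(beta, sum(x * x for x in A[0])) # first vector keeps shrinking
```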
Although relatively recent, lattice-based cryptography has distinguished itself on numerous counts, whether by the richness of the constructions it enables, by its conjectured resistance to the advent of quantum computers, or by the speed it achieves when instantiated on certain classes of lattices. One of the most powerful tools of lattice-based cryptography is Gaussian sampling. At a very high level, it allows one to prove knowledge of a particular basis of a lattice without revealing any information about that basis. It enables a wide variety of cryptosystems. Somewhat surprisingly, there are few practical instantiations of these cryptographic schemes, and the algorithms that perform Gaussian sampling are little studied. The goal of this thesis is to bridge the gap between the theory and the practice of Gaussian sampling. First, we study and improve the existing algorithms, through both a statistical analysis and a geometric approach. We then exploit the structures underlying many classes of lattices, which allows us to apply the ideas of the fast Fourier transform to a Gaussian sampling algorithm, reducing its complexity from quadratic to quasilinear. Finally, we use Gaussian sampling in practice to instantiate a signature scheme and an identity-based encryption scheme. The first yields the most compact signatures obtained with lattices to date, and the second allows encryption and decryption roughly a thousand times faster than a pairing-based scheme over elliptic curves.
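As a flavor of the core primitive (my own toy sketch, far from the constant-time, statistically precise samplers the thesis studies), rejection sampling from the discrete Gaussian over $\mathbb{Z}$:

```python
# Sketch: rejection sampler for the discrete Gaussian over Z with
# center c and parameter sigma, i.e. Pr[z] ~ exp(-(z-c)^2 / (2*sigma^2)).
# Real implementations need constant-time code and tail-bound analysis.
import math
import random

def sample_dgauss(sigma, c=0.0, tau=10):
    lo = int(math.floor(c - tau * sigma))   # tail cut at tau*sigma
    hi = int(math.ceil(c + tau * sigma))
    while True:
        z = random.randint(lo, hi)          # uniform proposal
        rho = math.exp(-((z - c) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:           # accept with probability rho(z)
            return z

samples = [sample_dgauss(3.0) for _ in range(10000)]
print(sum(samples) / len(samples))          # empirical mean, close to c = 0
```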
The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL or $L^3$) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer $d$-dimensional lattice basis with vectors of norm less than $B$ in an $n$-dimensional space, $L^3$ outputs a so-called $L^3$-reduced basis in polynomial time $O(d^5 n \log^3 B)$, using arithmetic operations on integers of bit-length $O(d \log B)$. This worst-case complexity is problematic for lattices arising in cryptanalysis, where $d$ and/or $\log B$ are often large. As a result, the original $L^3$ is almost never used in practice. Instead, one applies floating-point variants of $L^3$, where the long-integer arithmetic required by Gram–Schmidt orthogonalisation (central in $L^3$) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point $L^3$ is not even guaranteed to terminate, and the output basis may not be $L^3$-reduced at all. In this article, we introduce the $L^2$ algorithm, a new and natural floating-point variant of $L^3$ which provably outputs $L^3$-reduced bases in polynomial time $O(d^4 n (d + \log B) \log B)$. This is the first $L^3$ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to $\log B$, like the well-known Euclidean and Gaussian algorithms, which it generalizes.
Keywords: LLL, $L^3$, lattice reduction, public-key cryptanalysis
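To make the floating-point aspect concrete, here is a sketch using the third-party fpylll library, whose reduction code descends from the $L^2$ ideas; the parameter names are fpylll's, not the paper's:

```python
# Sketch: LLL with explicit double-precision Gram-Schmidt via fpylll.
from fpylll import IntegerMatrix, GSO, LLL

A = IntegerMatrix.random(50, "uniform", bits=30)
M = GSO.Mat(A, float_type="d")       # "d" = hardware doubles for the GSO
L = LLL.Reduction(M, delta=0.99, eta=0.51)
L()                                   # run the reduction in place on A
print(LLL.is_reduced(M))              # check the output is LLL-reduced
```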
Answering a question of Vera Sós, we show how Lovász' lattice reduction can be used to find a point of a given lattice, nearest within a factor of $c^d$ ($c$ = const.) to a given point in $\mathbb{R}^d$. We prove that each of two straightforward fast heuristic procedures achieves this goal when applied to a lattice given by a Lovász-reduced basis. The verification of one of them requires proving a geometric feature of Lovász-reduced bases: a $c_1^d$ lower bound on the angle between any member of the basis and the hyperplane generated by the other members, where $c_1 = \sqrt{2}/3$. As an application, we obtain a solution to the nonhomogeneous simultaneous diophantine approximation problem, optimal within a factor of $C^d$. In another application, we improve the Grötschel–Lovász–Schrijver version of H. W. Lenstra's integer linear programming algorithm. The algorithms, when applied to rational input vectors, run in polynomial time.
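The better known of the two procedures is now called Babai's nearest-plane algorithm; a compact floating-point sketch (mine, not the paper's rational-arithmetic version):

```python
# Sketch of the nearest-plane heuristic: given a (preferably LLL-reduced)
# basis B (rows) and a target t, find a nearby lattice point.
import numpy as np

def nearest_plane(B, t):
    B = np.asarray(B, dtype=float)
    # Gram-Schmidt orthogonalization of the rows of B.
    Bstar = B.copy()
    for i in range(1, len(B)):
        for j in range(i):
            Bstar[i] -= (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
    # Walk back through the nearest hyperplanes, last vector first.
    v = np.array(t, dtype=float)
    for i in reversed(range(len(B))):
        c = round((v @ Bstar[i]) / (Bstar[i] @ Bstar[i]))
        v -= c * B[i]
    return np.array(t, dtype=float) - v   # the lattice point found

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]    # a toy (unreduced) basis
print(nearest_plane(B, [2.4, 3.1, 0.7]))
```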
Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worst-case proved bounds, both in terms of the running time and the output quality. In this article, we investigate this puzzling statement by trying to model the average case of lattice reduction algorithms, starting with the celebrated Lenstra–Lenstra–Lovász algorithm ($L^3$). We discuss what is meant by lattice reduction on the average, and we present extensive experiments on the average-case behavior of $L^3$, in order to give a clearer picture of the differences and similarities between the average and worst cases. Our work is intended to clarify the practical behavior of $L^3$ and to raise theoretical questions about its average behavior.
The best lattice reduction algorithm known in practice for high dimension is Schnorr-Euchner’s BKZ: all security estimates of lattice cryptosystems are based on NTL’s old implementation of BKZ. However, recent progress on lattice enumeration suggests that BKZ and its NTL implementation are no longer optimal, but the precise impact on security estimates was unclear. We assess this impact thanks to extensive experiments with BKZ 2.0, the first state-of-the-art implementation of BKZ incorporating recent improvements, such as Gama-Nguyen-Regev pruning. We propose an efficient simulation algorithm to model the behaviour of BKZ in high dimension with high blocksize ≥50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates. For instance, our simulation suggests that the smallest NTRUSign parameter set, which was claimed to provide at least 93-bit security against key-recovery lattice attacks, actually offers at most 65-bit security.
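The simulation idea can be caricatured in a few lines under the geometric series assumption (my crude sketch, much simpler than the actual BKZ 2.0 simulator):

```python
# Sketch: predict the Gram-Schmidt log-profile of a BKZ-beta reduced basis
# under the geometric series assumption (GSA): ||b*_i|| decays geometrically
# at a rate set by the root Hermite factor delta(beta).
import math

def root_hermite_factor(beta):
    # Standard asymptotic heuristic for the root Hermite factor of BKZ-beta.
    return (beta / (2 * math.pi * math.e) *
            (math.pi * beta) ** (1 / beta)) ** (1 / (2 * (beta - 1)))

def gsa_profile(n, beta, log_vol=0.0):
    """Predicted log ||b*_i||, i = 0..n-1, for a basis of log-volume log_vol."""
    delta = root_hermite_factor(beta)
    slope = -2 * math.log(delta)             # log-decay per index under GSA
    intercept = log_vol / n - slope * (n - 1) / 2  # so the profile sums to log_vol
    return [intercept + slope * i for i in range(n)]

prof = gsa_profile(n=100, beta=50)
print(prof[0], prof[-1])   # predicted log-norms of first and last vectors
```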