Article

A New Upper Bound on the Smallest Counterexample to the Mertens Conjecture

... Two final consequences of Theorems 1.2 and 1.4 are bounds for the number of zeros in a unit interval and a short interval. These types of estimates have several applications in number theory, such as providing an effective disproof of the Mertens conjecture [Pin87;RK23], improving the error term in the explicit version of the Riemann-von Mangoldt formula [Dud16], and consequently obtaining improvements related to primes between consecutive cubes and consecutive powers [JCH24]. More generally, these two types of estimates are useful for any problems that require an estimate for the sum over the zeros of ζ(s) restricted to a certain range. ...
Preprint
In this article, we improve the recent work of Hasanalizade, Shen, and Wong by establishing $\left| N(T) - \frac{T}{2\pi} \log\left( \frac{T}{2\pi e}\right) \right| \le 0.10076\log T + 0.24460\log\log T + 8.08292$ for every $T \ge e$, where $N(T)$ is the number of non-trivial zeros $\rho = \beta + i\gamma$, with $0 < \gamma \le T$, of the Riemann zeta-function $\zeta(s)$. The main source of improvement comes from implementing new subconvexity bounds for $\zeta(\sigma + it)$ on some $\sigma_k$-lines inside the critical strip.
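To give a rough sense of scale (this is an illustration, not part of the cited preprint), the following Python sketch evaluates the main term of the Riemann-von Mangoldt formula and the stated error bound at a few values of $T$; the sample values of $T$ are arbitrary.

```python
import math

def zero_count_error_bound(T):
    """Right-hand side of the stated bound on |N(T) - (T/2pi) log(T/(2pi e))|,
    valid for T >= e (illustrative evaluation only)."""
    return 0.10076 * math.log(T) + 0.24460 * math.log(math.log(T)) + 8.08292

def main_term(T):
    """Main term (T/2pi) log(T/(2pi e)) of the Riemann-von Mangoldt formula."""
    return T / (2 * math.pi) * math.log(T / (2 * math.pi * math.e))

for T in (1e3, 1e6, 1e12):
    print(f"T={T:.0e}: main term ~ {main_term(T):.3e}, "
          f"error bound <= {zero_count_error_bound(T):.3f}")
```

The permitted error grows only logarithmically while the main term grows roughly linearly in $T$, which is what makes such bounds useful for sums over zeros in a fixed range.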
Article
Full-text available
We use state-of-the-art lattice algorithms to improve the upper bound on the lowest counterexample to the Mertens conjecture to $\approx \exp(1.96 \times 10^{19})$, which is significantly below the conjectured value of $\approx \exp(5.15 \times 10^{23})$ by Kotnik and van de Lune (Exp Math 13:473–481, 2004).
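Numbers of this size cannot be represented in floating point, so comparisons are made through their natural logarithms. A minimal sketch (the numbers are the two exponents quoted above):

```python
# Work with natural logarithms, since exp(1.96e19) overflows any float type.
log_new_upper_bound = 1.96e19      # log of the improved upper bound on the lowest counterexample
log_conjectured_value = 5.15e23    # log of the value conjectured by Kotnik and van de Lune

# Ratio of the exponents shows how far below the conjectured location the new bound sits.
print(log_conjectured_value / log_new_upper_bound)   # ~ 2.6e4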
Article
Full-text available
In this article, we study the Mertens conjecture. We revisit and improve the original constructive disproof of János Pintz. We obtain a new lower bound for the minimal counterexample and new numerical results for this conjecture.
Conference Paper
Full-text available
Let $M(x) = \sum_{1 \le n \le x} \mu(n)$, where $\mu(n)$ is the Möbius function. The Mertens conjecture that $|M(x)|/\sqrt{x} < 1$ for all $x > 1$ was disproved in 1985 by Odlyzko and te Riele [13]. In the present paper, the known lower bound 1.06 for $\limsup M(x)/\sqrt{x}$ is raised to 1.218, and the known upper bound -1.009 for $\liminf M(x)/\sqrt{x}$ is lowered to -1.229. In addition, the explicit upper bound of Pintz [14] on the smallest number for which the Mertens conjecture is false is reduced from $\exp(3.21 \times 10^{64})$ to $\exp(1.59 \times 10^{40})$. Finally, new numerical evidence is presented for the conjecture that $M(x)/\sqrt{x} = \Omega_{\pm}(\sqrt{\log\log\log x})$.
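As a quick illustration of these definitions (not taken from the cited paper), the following Python sketch sieves the Möbius function and confirms that $|M(x)|/\sqrt{x}$ stays well below 1 in a small range; the sieve bound $10^6$ is an arbitrary choice.

```python
import math

def mobius_sieve(n):
    """Compute mu(1..n) with a simple linear-style sieve."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 10**6
mu = mobius_sieve(N)
M, worst = 0, 0.0
for x in range(1, N + 1):
    M += mu[x]
    if x > 1:
        worst = max(worst, abs(M) / math.sqrt(x))
print(worst)   # stays well below 1 in this small range
```

Counterexamples, if they exist, are astronomically large, which is why the papers listed here work with the zeros of the zeta function rather than direct computation.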
Article
Full-text available
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating-point version of the $L^3$-algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the $L^3$-algorithm with deep insertions and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
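A minimal modern experiment in the same spirit (floating-point LLL followed by blockwise reduction) can be run with the fpylll bindings to fplll; this is an assumption about tooling, not the implementation used in the paper, and the random q-ary basis parameters are arbitrary.

```python
from fpylll import IntegerMatrix, LLL, BKZ

A = IntegerMatrix.random(40, "qary", k=20, bits=30)  # random q-ary lattice basis
LLL.reduction(A)                                      # floating-point LLL reduction
BKZ.reduction(A, BKZ.Param(block_size=20))            # blockwise (BKZ-style) reduction
print(A[0].norm())                                    # length of the first reduced basis vector
```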
Article
Full-text available
This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend ideas from the IEEE 754 standard to arbitrary precision, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved, with no significant slowdown with respect to other arbitrary-precision tools, and discuss a few applications where such a library can be useful. Categories and Subject Descriptors: D.3.0 [Programming Languages]: General – Standards; G.1.0 [Numerical Analysis]: General – computer arithmetic, multiple precision arithmetic; G.1.2 [Numerical Analysis]: Approximation – elementary and special function approximation; G.4 [Mathematics of Computing]: Mathematical Software – algorithm design, efficiency, portability
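A hedged illustration of correctly rounded arbitrary-precision arithmetic from Python: the gmpy2 package wraps the MPFR library described above (assumption: gmpy2 is installed; the precision of 200 bits is an arbitrary choice).

```python
import gmpy2
from gmpy2 import mpfr

gmpy2.get_context().precision = 200   # 200-bit mantissa for all mpfr results
x = gmpy2.sqrt(mpfr(2))               # sqrt(2), correctly rounded to the working precision
print(x)
print(gmpy2.const_pi())               # pi, correctly rounded at the working precision
```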
Article
Full-text available
In this paper we present a polynomial-time algorithm to solve the following problem: given a non-zero polynomial $f \in \mathbb{Q}[X]$ in one variable with rational coefficients, find the decomposition of $f$ into irreducible factors in $\mathbb{Q}[X]$. It is well known that this is equivalent to factoring primitive polynomials $f \in \mathbb{Z}[X]$ into irreducible factors in $\mathbb{Z}[X]$. Here we call $f \in \mathbb{Z}[X]$ primitive if the greatest common divisor of its coefficients (the content of $f$) is 1. Our algorithm performs well in practice, cf. [8]. Its running time, measured in bit operations, is $O(n^{12} + n^9(\log|f|)^3)$.
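For a concrete view of the problem being solved (factoring over $\mathbb{Q}[X]$), a minimal sketch using SymPy as a stand-in for the paper's LLL-based algorithm; the example polynomial is arbitrary.

```python
from sympy import symbols, factor_list

x = symbols('x')
f = x**4 - 1
content, factors = factor_list(f)      # content and irreducible factors over Q
print(content, factors)                # 1 [(x - 1, 1), (x + 1, 1), (x**2 + 1, 1)]
```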
Article
Full-text available
We describe a numerical experiment concerning the order of magnitude of $q(x) := M(x)/\sqrt{x}$, where $M(x)$ is the Mertens function (the summatory function of the Möbius function). It is known that, if the Riemann hypothesis is true and all nontrivial zeros of the Riemann zeta-function are simple, $q(x)$ can be approximated by a series of trigonometric functions of $\log x$. We try to obtain an $\Omega$-estimate of the order of $q(x)$ by searching for increasingly large extrema of the sum of the first $10^2$, $10^4$, and $10^6$ terms of this series. Based on the extrema found in the range $10^4 \le x \le 10^{10^{10}}$ we conjecture that $q(x) = \Omega_{\pm}(\sqrt{\log\log\log x})$.
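A rough numerical sketch of the truncated series in question, assuming (as the paper does) the Riemann hypothesis and simple zeros; it uses mpmath and only the first 50 zeros, both arbitrary choices, so it only illustrates the shape of the computation rather than reproducing the experiment.

```python
from mpmath import mp, zetazero, zeta, log, re

mp.dps = 30

def q_approx(x, n_zeros=50):
    """Truncated series approximation to q(x) = M(x)/sqrt(x):
    q(x) ~ sum over zeros rho = 1/2 + i*gamma of 2*Re( x^{i*gamma} / (rho * zeta'(rho)) )."""
    total = mp.mpf(0)
    for k in range(1, n_zeros + 1):
        rho = zetazero(k)                 # k-th nontrivial zero, 1/2 + i*gamma_k
        gamma = rho.imag
        term = mp.expj(gamma * log(x)) / (rho * zeta(rho, derivative=1))
        total += 2 * re(term)
    return total

print(q_approx(mp.mpf(10) ** 6))
```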
Article
Full-text available
The Mertens conjecture states that $|M(x)| < x^{1/2}$ for all $x > 1$, where $M(x) = \sum_{n \le x} \mu(n)$ and $\mu(n)$ is the Möbius function, defined by $\mu(n) = 0$ if $p^2 \mid n$ for some prime $p$, $\mu(n) = (-1)^k$ if $n = p_1 \cdots p_k$ for distinct primes $p_i$, and $\mu(1) = 1$. This conjecture has attracted a substantial amount of interest in its almost 100 years of existence because its truth was known to imply the truth of the Riemann hypothesis. This paper disproves the Mertens conjecture by showing that $\limsup_{x \to \infty} M(x)\, x^{-1/2} > 1.06$. The disproof relies on extensive computations with the zeros of the zeta function, and does not provide an explicit counterexample. (A. M. Odlyzko, AT&T Bell Laboratories, Murray Hill, New Jersey, USA, and H. J. J. te Riele, Centre for Mathematics and Computer Science, Amsterdam, The Netherlands.)
Chapter
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, with $\omega \in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O(n^{\omega}(p+n)^{1+\varepsilon})$ for precision $p = O(\log \|B\|_{\max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
Chapter
Lattice-based cryptography relies on generating random bases which are difficult to fully reduce. Given a lattice basis (such as the private basis for a cryptosystem), all other bases are related by multiplication by matrices in $GL(n,\mathbb{Z})$. We compare the strengths of various methods to sample random elements of $GL(n,\mathbb{Z})$, finding some are stronger than others with respect to the problem of recognizing rotations of the $\mathbb{Z}^n$ lattice. In particular, the standard algorithm of multiplying unipotent generators together (as implemented in Magma's RandomSLnZ command) generates instances of this last problem which can be efficiently broken, even in dimensions nearing 1,500. Likewise, we find that the random basis generation method in one of the NIST Post-Quantum Cryptography competition submissions (DRS) generates instances which can be efficiently broken, even at its 256-bit security settings. Other random basis generation algorithms (some older, some newer) are described which appear to be much stronger.
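A hedged sketch of the "multiply unipotent generators" idea discussed above: build a matrix in $SL(n,\mathbb{Z})$ as a product of random elementary transvections. This mirrors the general idea behind routines such as Magma's RandomSLnZ, not their exact implementation; the parameters are arbitrary.

```python
import random
from sympy import eye

def random_unimodular(n, steps=60, bound=3):
    """Product of random unipotent generators E_{ij}(c): add c times row j to row i."""
    U = eye(n)
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        c = random.randint(-bound, bound)
        U[i, :] = U[i, :] + c * U[j, :]
    return U

U = random_unimodular(6)
print(U.det())   # always 1: products of unipotent matrices lie in SL(n, Z)
```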
Article
The Mertens function is defined as $M(x) = \sum_{n \le x} \mu(n)$, where $\mu(n)$ is the Möbius function. The Mertens conjecture states that $|M(x)/\sqrt{x}| < 1$ for $x > 1$; it was proven false in 1985 by showing that $\liminf M(x)/\sqrt{x} < -1.009$ and $\limsup M(x)/\sqrt{x} > 1.06$. The same techniques are revisited here with present-day hardware and algorithms, giving improved lower and upper bounds of $-1.837625$ and $1.826054$. In addition, $M(x)$ was computed for all $x \le 10^{16}$, recording all extrema, all zeros, and $10^8$ values sampled at a regular interval. Lastly, an algorithm to compute $M(x)$ in $O(x^{2/3+\varepsilon})$ time was used on all powers of two up to $2^{73}$.
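The paper's $O(x^{2/3+\varepsilon})$ algorithm is not reproduced here, but a simpler classical route to isolated values of $M(x)$ uses the identity $\sum_{n \le x} M(\lfloor x/n \rfloor) = 1$ together with memoisation over the $O(\sqrt{x})$ distinct values of $\lfloor x/n \rfloor$; a hedged Python sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mertens(x):
    """M(x) via M(x) = 1 - sum_{n>=2} M(floor(x/n)), grouping equal quotients."""
    total = 1
    n = 2
    while n <= x:
        q = x // n
        n_next = x // q + 1            # largest n sharing the same quotient floor(x/n)
        total -= (n_next - n) * mertens(q)
        n = n_next
    return total

print(mertens(10**6))   # 212, a known value of M(10^6), as a sanity check
```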
Conference Paper
In this paper, we investigate a variant of the BKZ algorithm, called progressive BKZ, which performs BKZ reductions by starting with a small blocksize and gradually switching to larger blocks as the process continues. We discuss techniques to accelerate the speed of the progressive BKZ algorithm by optimizing the following parameters: blocksize, searching radius and probability for pruning of the local enumeration algorithm, and the constant in the geometric series assumption (GSA). We then propose a simulator for predicting the length of the Gram-Schmidt basis obtained from the BKZ reduction. We also present a model for estimating the computational cost of the proposed progressive BKZ by considering the efficient implementation of the local enumeration algorithm and the LLL algorithm. Finally, we compare the cost of the proposed progressive BKZ with that of other algorithms using instances from the Darmstadt SVP Challenge. The proposed algorithm is approximately 50 times faster than BKZ 2.0 (proposed by Chen-Nguyen) for solving the SVP Challenge up to 160 dimensions.
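A minimal sketch of the "progressive" idea described above, namely running BKZ with a gradually increasing blocksize, using fpylll as a stand-in implementation (an assumption: the paper's optimised parameter choices, pruning settings, and simulator are not reproduced; dimensions and blocksizes here are arbitrary).

```python
from fpylll import IntegerMatrix, LLL, BKZ

A = IntegerMatrix.random(60, "qary", k=30, bits=30)
LLL.reduction(A)                                   # cheap preprocessing
for beta in range(4, 28, 4):                       # blocksizes 4, 8, ..., 24
    BKZ.reduction(A, BKZ.Param(block_size=beta))
print(A[0].norm())                                 # length of the first basis vector after reduction
```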
Thesis
Although relatively recent, lattice-based cryptography has distinguished itself on many fronts, whether through the richness of the constructions it enables, its presumed resistance to the advent of quantum computers, or the speed it achieves when instantiated on certain classes of lattices. One of the most powerful tools of lattice-based cryptography is Gaussian sampling. At a very high level, it makes it possible to prove knowledge of a particular basis of a lattice without revealing any information about that basis, and it allows a wide variety of cryptosystems to be built. Somewhat surprisingly, few practical instantiations of these cryptographic schemes exist, and the algorithms for performing Gaussian sampling have received little study. The goal of this thesis is to bridge the gap between the theory and the practice of Gaussian sampling. We first study and improve the existing algorithms, through both a statistical analysis and a geometric approach. We then exploit the structures underlying many classes of lattices, which allows us to apply the ideas of the fast Fourier transform to a Gaussian sampling algorithm, reducing its complexity from quadratic to quasilinear. Finally, we use Gaussian sampling in practice to instantiate a signature scheme and an identity-based encryption scheme. The former yields the most compact signatures currently obtained from lattices, and the latter allows encryption and decryption roughly a thousand times faster than a scheme based on pairings over elliptic curves.
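One basic building block of the lattice Gaussian samplers discussed in the thesis is sampling a discrete Gaussian over the integers. A hedged, minimal rejection-sampling sketch (illustrative only: not constant-time and not the thesis's optimised algorithms; the parameters are arbitrary):

```python
import math
import random

def sample_z(sigma, center=0.0, tail=12):
    """Sample from the discrete Gaussian D_{Z, sigma, center} by rejection
    from a uniform proposal on a wide tail-cut interval."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:      # accept with probability proportional to the Gaussian weight
            return z

samples = [sample_z(3.2) for _ in range(10000)]
print(sum(samples) / len(samples))     # close to the center 0
```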
Conference Paper
The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL or $L^3$) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer $d$-dimensional lattice basis with vectors of norm less than $B$ in an $n$-dimensional space, $L^3$ outputs a so-called $L^3$-reduced basis in polynomial time $O(d^5 n \log^3 B)$, using arithmetic operations on integers of bit-length $O(d \log B)$. This worst-case complexity is problematic for lattices arising in cryptanalysis, where $d$ and/or $\log B$ are often large. As a result, the original $L^3$ is almost never used in practice. Instead, one applies floating-point variants of $L^3$, where the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central in $L^3$) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point $L^3$ is not even guaranteed to terminate, and the output basis may not be $L^3$-reduced at all. In this article, we introduce the $L^2$ algorithm, a new and natural floating-point variant of $L^3$ which provably outputs $L^3$-reduced bases in polynomial time $O(d^4 n (d + \log B) \log B)$. This is the first $L^3$ algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to $\log B$, like the well-known Euclidean and Gaussian algorithms, which it generalizes. Keywords: LLL, $L^3$, lattice reduction, public-key cryptanalysis
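The floating-point data at the heart of such variants are the Gram-Schmidt coefficients. A hedged illustration of the textbook computation they approximate (a direct orthogonalisation in NumPy, not the provably stable procedure of the paper; the basis is arbitrary):

```python
import numpy as np

def gram_schmidt(B):
    """Return the Gram-Schmidt coefficients mu and the squared norms of B*."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    Bstar = np.zeros_like(B)
    mu = np.eye(n)
    for i in range(n):
        Bstar[i] = B[i]
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu[i, j] * Bstar[j]
    return mu, np.sum(Bstar ** 2, axis=1)

mu, norms = gram_schmidt([[3, 1, 0], [1, 2, 1], [0, 1, 4]])
print(mu)
print(norms)
```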
Article
Answering a question of Vera Sós, we show how Lovász' lattice reduction can be used to find a point of a given lattice, nearest within a factor of $c^d$ ($c$ = const.) to a given point in $\mathbb{R}^d$. We prove that each of two straightforward fast heuristic procedures achieves this goal when applied to a lattice given by a Lovász-reduced basis. The verification of one of them requires proving a geometric feature of Lovász-reduced bases: a $c_1^d$ lower bound on the angle between any member of the basis and the hyperplane generated by the other members, where $c_1 = \sqrt{2/3}$. As an application, we obtain a solution to the nonhomogeneous simultaneous diophantine approximation problem, optimal within a factor of $C^d$. In another application, we improve the Grötschel-Lovász-Schrijver version of H. W. Lenstra's integer linear programming algorithm. The algorithms, when applied to rational input vectors, run in polynomial time.
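A hedged sketch of the simpler of the two heuristics analysed here, the "rounding off" procedure: express the target in the (ideally reduced) basis, round the coordinates, and map back to obtain a nearby lattice point. The basis and target below are arbitrary.

```python
import numpy as np

def babai_round(B, t):
    """Approximate closest lattice vector to t, for a lattice with basis rows B."""
    B = np.asarray(B, dtype=float)
    coords = np.linalg.solve(B.T, np.asarray(t, dtype=float))  # solve t = B^T c
    return B.T @ np.rint(coords)                               # round coordinates, map back

B = [[2, 0, 0], [1, 3, 0], [1, 1, 4]]   # ideally an LLL-reduced basis
print(babai_round(B, [2.4, 2.7, 3.9]))
```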
Conference Paper
Despite their popularity, lattice reduction algorithms remain mysterious in many ways. It has been widely reported that they behave much more nicely than what was expected from the worst-case proved bounds, both in terms of the running time and the output quality. In this article, we investigate this puzzling statement by trying to model the average case of lattice reduction algorithms, starting with the celebrated Lenstra-Lenstra-Lovász algorithm ($L^3$). We discuss what is meant by lattice reduction on the average, and we present extensive experiments on the average case behavior of $L^3$, in order to give a clearer picture of the differences/similarities between the average and worst cases. Our work is intended to clarify the practical behavior of $L^3$ and to raise theoretical questions on its average behavior.
Conference Paper
The best lattice reduction algorithm known in practice for high dimension is Schnorr-Euchner’s BKZ: all security estimates of lattice cryptosystems are based on NTL’s old implementation of BKZ. However, recent progress on lattice enumeration suggests that BKZ and its NTL implementation are no longer optimal, but the precise impact on security estimates was unclear. We assess this impact thanks to extensive experiments with BKZ 2.0, the first state-of-the-art implementation of BKZ incorporating recent improvements, such as Gama-Nguyen-Regev pruning. We propose an efficient simulation algorithm to model the behaviour of BKZ in high dimension with high blocksize ≥50, which can predict approximately both the output quality and the running time, thereby revising lattice security estimates. For instance, our simulation suggests that the smallest NTRUSign parameter set, which was claimed to provide at least 93-bit security against key-recovery lattice attacks, actually offers at most 65-bit security.
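A sketch in the spirit of the simulation approach described above, using the BKZ simulator shipped with fpylll rather than the paper's own BKZ 2.0 simulator (assumptions: fpylll is installed and exposes fpylll.tools.bkz_simulator.simulate with this calling convention; the toy basis profile and parameters are arbitrary).

```python
from math import log
from fpylll import BKZ
from fpylll.tools.bkz_simulator import simulate

n = 180
# Toy profile: squared Gram-Schmidt norms of an "unreduced-looking" basis.
r = [2.0 ** (2 * (n - i)) for i in range(n)]
r_out, loops = simulate(r, BKZ.Param(block_size=60, max_loops=8))
print(loops, log(r_out[0], 2) / 2)   # tours simulated, predicted log2-length of the first vector
```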
The GNU Multiple Precision Arithmetic Library
  • T. Granlund
  • The GNU MP Development Team