Article

# CRYSTALS-Dilithium: A Lattice-Based Digital Signature Scheme

Authors:
• CryptoExperts

## Abstract

In this paper, we present the lattice-based signature scheme Dilithium, which is a component of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) suite submitted to NIST's call for post-quantum cryptographic standards. The design of the scheme avoids all uses of discrete Gaussian sampling and is easily implementable in constant time. For the same security levels, our scheme has a public key that is 2.5x smaller than that of the previously most efficient lattice-based schemes that did not use Gaussians, while having essentially the same signature size. In addition to the new design, we significantly improve the running time of the main component of many lattice-based constructions: the number theoretic transform. Our AVX2-based implementation results in a speed-up of roughly a factor of 2 over the best previously published algorithms. The techniques for obtaining this speed-up also have applications to other lattice-based schemes.


... Although hardware solutions offer significant performance gains over software or HW/SW co-design techniques, such implementations lack the flexibility needed for evolving lattice cryptosystems such as those under review in the ongoing US post-quantum cryptography standardization [27]. Indeed, flexibility is necessary to evaluate the multiple candidates submitted to the NIST process: while the CRYSTALS-Dilithium [4] algorithm operates on polynomials of degree 255 with 23-bit coefficients, NewHope [1] uses polynomials of degree 1023 with 14-bit coefficients. Even hardware specially designed to cater to multiple algorithms [13] can fall short on flexibility: the update to the prime modulus q in NIST Round 2 of CRYSTALS-Kyber [28] requires taping out a new chip, as it cannot be implemented with the earlier design [13]. ...
... The results validate the efficiency and flexibility of our approach. We implement the NTT of the latest NIST Round-2 post-quantum standard candidates: NewHope [1], qTESLA [2], CRYSTALS-Kyber [3] (and its Round-1 version), CRYSTALS-Dilithium [4], and Falcon [5]. On our proposed architecture, implementation effectively becomes an automatic compilation of the reference software from the C language. ...
... BOOM has a high-performance architecture that supports complex out-of-order execution, branch prediction and speculative execution, and uses 64-bit base integer, integer multiplication and division, atomic, and single- and double-precision floating-point instructions [34]. We compiled the reference C codes submitted to the NIST post-quantum standardization that use the NTT (NewHope [1], qTESLA [2], CRYSTALS-Kyber [3], CRYSTALS-Dilithium [4], and Falcon [5]) with the riscv-gnu toolchains referenced in the corresponding git repositories for picoRV32 with our extensions [33] and for BOOM [34]. ...
... Finally, [28] presented the development and use of a framework for running TLS experiments cheaply by emulating network conditions using the networking features of the Linux kernel. They considered one key exchange algorithm (ECDH NIST P-256 [29]), three key encapsulation mechanisms (SIKE [20], CRYSTALS-Kyber [24], FrodoKEM [7]), and four digital signature schemes (ECDSA NIST P-256 [30], CRYSTALS-Dilithium [31], qTESLA [32], Picnic [33]). ...
... All implementations were in their versions for the NIST second-round standardization process. For authentication, we select CRYSTALS-Dilithium [31]. NewHope [9] is taken as the key exchange to share secret information. ...
... The first protocol is used to establish the shared secret keying material and takes place with NewHope [9]. Then, certificate-based mutual authentication is performed with CRYSTALS-Dilithium [31]. Once the client and server entities have verified that each other is authentic, AES [34] is executed during the record protocol. ...
Article
In recent years, there has been a notable amount of research on developing cryptographic schemes that are secure against both quantum and classical computers. In 2016, the National Institute of Standards and Technology (NIST) initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public key cryptographic schemes. This process originated because quantum computers can exploit quantum mechanical phenomena and solve mathematical problems that are difficult or intractable for classical computers. This kind of mathematical problem is the basis of secure public key cryptography. As a consequence, in the near future quantum computers will be able to break many of the public key schemes currently in use. The challenge is especially acute for devices with different architectures, which might not be well equipped to run the new standards and to interoperate with existing communication protocols and networks. In this work, we analyze the performance of post-quantum schemes in the transport layer security (TLS) protocol, considering x86 as the server architecture and x86/ARM architectures as clients, with none of them relying on cloud computing or virtualized environments. Our analysis considers integrating the implementations of two cryptographic schemes that advanced to the second round of the post-quantum standardization process, namely Dilithium and NewHope. The performance of post-quantum schemes in the TLS protocol is statistically analyzed on x86 and ARM architectures, including relationship, effect, and survival analyses.
... FrodoKEM [5], NewHope [6], CRYSTALS-Kyber [7], the learning with rounding (LWR)-based schemes Round5 [8] and SABER [9], and NTRU-based schemes [10,11]. For digital signatures, the LWE-based schemes TESLA [12] and CRYSTALS-Dilithium [13][14][15], and the NTRU-based scheme FALCON [16], are the only lattice-based schemes. In July 2020, the third-round finalists of the NIST PQC were announced [17]. ...
... The Module-LWE (MLWE)-based signature scheme CRYSTALS-Dilithium [13][14][15] (hereinafter referred to as Dilithium) is also among the most promising candidates due to its efficiency, especially its public key size. Dilithium decreases the size of the public key by separating the high/low order bits of the element of the LWE sample. ...
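The high/low-bits separation mentioned above is Dilithium's Power2Round step: the public key keeps only the high part of t = A·s1 + s2 and drops the low d bits. A minimal sketch in Python, using the q and d values from the Dilithium specification (the helper name is mine):

```python
# Sketch of Dilithium-style high/low bit decomposition (Power2Round).
# The public key stores only the high part r1; the centered low part r0
# is dropped, shrinking the key. Parameters q and d follow the spec.

Q = 8380417   # Dilithium modulus, q = 2^23 - 2^13 + 1
D = 13        # number of low-order bits dropped from the public key

def power2round(r: int, d: int = D):
    """Split r (mod q) as r1*2^d + r0 with r0 in (-2^(d-1), 2^(d-1)]."""
    r = r % Q
    r0 = r % (1 << d)
    if r0 > (1 << (d - 1)):      # centre the low part around 0
        r0 -= 1 << d
    r1 = (r - r0) >> d           # r - r0 is divisible by 2^d by construction
    return r1, r0

r1, r0 = power2round(1234567)
assert (r1 * (1 << D) + r0) % Q == 1234567
```

Dropping r0 is what makes the public key roughly 2.5x smaller, at the price of extra "hint" bits carried in each signature so the verifier can still reconstruct the high parts it needs.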
... First, we provide a full proof of the tight security reduction for MLWRSign in the quantum random-oracle model (QROM) from the MLWR problem and another non-interactive assumption, based on the framework given in [15]. Second, we present additional parameter sets achieving 192- and 256-bit security, which neither the preliminary version [1] nor Dilithium [13][14][15] provides. Third, we give an optimized implementation of MLWRSign for CPUs supporting the AVX2 instruction set; the results show that the AVX2-optimized version of MLWRSign achieves 1.41x-1.80x ...
Article
We propose a lattice-based digital signature scheme, MLWRSign, obtained by modifying Dilithium, which is one of the third-round finalists of NIST's call for post-quantum cryptographic standards. To the best of our knowledge, MLWRSign is the first signature scheme whose security is based on the (module) learning with rounding (LWR) problem. Due to the simplicity of the LWR problem, the secret key size is reduced by approximately 30% in our scheme compared to Dilithium, while achieving the same level of security. Moreover, we implemented MLWRSign and observed that its running time is comparable to that of Dilithium.
... These schemes offer varying key sizes under varying performance figures, but lattice-based schemes have comparatively compact keys and exhibit better performance. Five out of the seven finalists belong to the lattice category; two of the lattice schemes are digital signatures, Dilithium [13] being one of them. It belongs to the CRYSTALS family, which has another finalist, KYBER, a KEM. ...
... The Cryptographic Suite for Algebraic Lattices (CRYSTALS) consists of two cryptographic schemes: Kyber [41], a KEM, and Dilithium [13], a digital signature algorithm. The suite was submitted to the NIST PQC competition by the CRYSTALS team, and both schemes are among the Round 3 finalists. ...
... We shall just briefly explain the key generation, signing and verification algorithms of Dilithium scheme. We refer the reader to the original specifications for details [13]. ...
Preprint
Motivated by the rise of quantum computers, existing public-key cryptosystems are expected to be replaced by post-quantum schemes in the next decade in billions of devices. To facilitate the transition, NIST is running a standardization process which is currently in its final round. Only three digital signature schemes are left in the competition, among which Dilithium and Falcon are the ones based on lattices. Classical fault attacks on signature schemes make use of pairs of faulty and correct signatures to recover the secret key, which only works on deterministic schemes. To counter such attacks, Dilithium offers a randomized version which makes each signature unique, even when signing identical messages. In this work, we introduce a novel Signature Correction Attack which applies not only to the deterministic version but also to the randomized version of Dilithium and is effective even on constant-time implementations using AVX2 instructions. The Signature Correction Attack exploits the mathematical structure of Dilithium to recover secret key bits by using faulty signatures and the public key. It can work for any fault mechanism which can induce single bit-flips. For demonstration, we use Rowhammer-induced faults. Thus, our attack does not require any physical access or special privileges, and hence could also be mounted on shared cloud servers. We perform a thorough classical and quantum security analysis of Dilithium and successfully recover 1,851 bits out of 3,072 bits of the secret key $s_1$ for security level 2. The lattice strength against quantum attackers is reduced from $2^{128}$ to $2^{81}$ while the strength against classical attackers is reduced from $2^{141}$ to $2^{89}$. Hence, the Signature Correction Attack may be employed to achieve a practical attack on Dilithium (security level 2) as proposed in Round 3 of the NIST post-quantum standardization process.
... For example, BLISS was not submitted to the NIST post-quantum standardization effort partly due to those concerns. Moreover, the second-round candidate Dilithium [Duc+18], which can be seen as a direct successor of BLISS, replaces Gaussian distributions with uniform ones, at the cost of larger parameters and a less efficient implementation, specifically citing implementation issues as the justification. ...
... The Dilithium performance numbers are for the fastest parameter set available in SUPERCOP, namely the dilithium2 implementation, which corresponds to the "medium" security parameters in [Duc+18] (no implementation is provided for the "weak" parameters). Timings for both the portable C (ref) and the AVX2 platform-specific (avx2) implementations are given in Table 2.2. ...
... The qTESLA signature scheme is also a Fiat-Shamir lattice-based signature derived from the original work of Lyubashevsky [Lyu12]. This signature is, together with Dilithium [Duc+18], one of the most recent iterations of this line of research. We slightly modify the signature and parameters to ease the addition of the countermeasure while keeping the original security. ...
Thesis
Lattice-based cryptography is considered a quantum-safe alternative for the replacement of currently deployed schemes based on RSA and on the discrete logarithm over prime fields or elliptic curves. It offers strong theoretical security guarantees, a large array of achievable primitives, and a competitive level of efficiency. Nowadays, in the context of the NIST post-quantum standardization process, future standards may ultimately be chosen, and several new lattice-based schemes are high-profile candidates. Cryptographic research has been encouraged to analyze lattice-based cryptosystems, with a particular focus on practical aspects. This thesis is rooted in this effort. In addition to black-box cryptanalysis with classical computing resources, we investigate the extended security of these new lattice-based cryptosystems, employing a broad spectrum of attack models, e.g. quantum, misuse, timing or physical attacks. Given that these models have already been applied to a large variety of pre-quantum asymmetric and symmetric schemes, we concentrate our efforts on leveraging and addressing the new features introduced by lattice structures. Our contribution is twofold: defensive, i.e. countermeasures for implementations of lattice-based schemes, and offensive, i.e. cryptanalysis. On the defensive side, in view of the numerous recent timing and physical attacks, we wear our designer's hat and investigate algorithmic protections. We introduce some new algorithmic and mathematical tools to construct provable algorithmic countermeasures in order to systematically prevent all timing and physical attacks. We thus participate in the actual provable protection of the GLP, BLISS, qTesla and Falcon lattice-based signature schemes. On the offensive side, we estimate the applicability and complexity of novel attacks leveraging the lack of perfect correctness introduced in certain lattice-based encryption schemes to improve their performance.
We show that such a compromise may enable decryption failure attacks in a misuse or quantum model. We finally introduce an algorithmic cryptanalysis tool that assesses the security of the mathematical problem underlying lattice-based schemes when partial knowledge of the secret is available. The usefulness of this new framework is demonstrated with the improvement and automation of several known classical, decryption-failure, and side-channel attacks.
... Crystals-Dilithium is a lattice-based algorithm that relies on the hardness of the Learning With Errors (LWE) problem [12]. In addition, compared to other digital-signature algorithms, its running times for key generation, signature generation, and signature verification are evenly balanced. ...
... Crystals-Dilithium is one of the most promising digital-signature candidates in the final round of the NIST PQC standardization process. Crystals-Dilithium is based on the difficulty of the Module Learning with Errors (MLWE) problem and shares basic characteristics and structure with Crystals-Kyber [12,13]. Crystals-Dilithium employs the Fiat-Shamir with aborts method and builds on Module-LWE; as a result, it provides a higher level of security than other ring-LWE-based schemes. ...
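The "Fiat-Shamir with aborts" idea mentioned above can be illustrated with a toy one-dimensional sketch: the signer resamples its mask whenever the candidate response would leak information about the secret. This sketch and its parameters are mine for illustration only; real Dilithium operates on vectors of polynomials with a hash-derived challenge.

```python
# Toy one-dimensional illustration of the rejection ("abort") step in
# Fiat-Shamir with aborts. NOT the real Dilithium algorithm.
import secrets

GAMMA = 2**17      # mask range (illustrative)
BETA = 375         # assumed bound on |challenge * s| (illustrative)

def sign_toy(s: int, challenge: int) -> int:
    """Retry until z = y + c*s lies in a range independent of s."""
    while True:
        # y uniform in [-GAMMA, GAMMA]
        y = secrets.randbelow(2 * GAMMA + 1) - GAMMA
        z = y + challenge * s
        # Abort and resample unless |z| <= GAMMA - BETA: inside that range
        # the distribution of z is uniform regardless of the secret s.
        if abs(z) <= GAMMA - BETA:
            return z

z = sign_toy(s=3, challenge=17)
assert abs(z) <= GAMMA - BETA
```

The abort condition is what lets the scheme use simple uniform masks instead of discrete Gaussians: accepted responses are uniform on a fixed interval, so they carry no information about s.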
Article
Crystals-Dilithium is one of the digital-signature algorithms in the final round of NIST's ongoing post-quantum cryptography (PQC) standardization. Security and computational efficiency of software and hardware implementations are the primary criteria for PQC standardization. Many studies have been conducted to apply Dilithium efficiently in various environments; however, they focused on traditionally used PCs and 32-bit Advanced RISC Machine (ARM) processors (Cortex-M4). ARMv8-based processors are more advanced embedded microcontrollers (MCUs) and have been widely used for various IoT devices, edge computing devices, and On-Board Units in autonomous driving cars. In this study, we present an efficient Crystals-Dilithium implementation on an ARMv8-based MCU. To enhance Dilithium's performance, we optimize number theoretic transform (NTT)-based polynomial multiplication, the core operation of Dilithium, by leveraging ARMv8's architectural properties such as its large register set and NEON engine. We apply task parallelism to NTT-based polynomial multiplication using the NEON engine. In addition, we reduce the number of memory accesses during NTT-based polynomial multiplication with the proposed merging and register-holding techniques. Finally, we present an interleaved NTT-based multiplication executed simultaneously on the ARM processor and the NEON engine. This implementation further optimizes performance by hiding the ARM processor latency behind the NEON overheads. Through the proposed optimization methods, for Dilithium 3, we achieved a performance improvement of about 43.83% in key pair generation, 113.25% in signing, and 41.92% in verification compared to the reference implementation submitted to the final round of the NIST PQC competition.
... Lattice-based cryptography has already gained great interest, and it forms the mathematical basis for many different applications such as post-quantum key-encapsulation mechanisms (KEMs), post-quantum signature protocols, and homomorphic encryption [1], [2]. The National Institute of Standards and Technology (NIST) started a post-quantum cryptography standardization process in 2016, and many lattice-based post-quantum schemes have been proposed since then. ...
... The schoolbook polynomial multiplication method is inefficient for implementing polynomial multiplication operations, as it has O(n^2) complexity. The number theoretic transform (NTT) reduces this O(n^2) complexity to quasi-linear complexity and is therefore utilized in many lattice-based cryptosystems that would otherwise suffer from the high complexity of polynomial arithmetic [1], [2], [3], [4], [5]. There are many works in the literature targeting efficient implementations of the main arithmetic blocks of post-quantum cryptosystems for software [6], [7], [8] and hardware [9], [10], [11], [12], [13], [14] platforms. ...
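The quasi-linear complexity mentioned above comes from the NTT's divide-and-conquer structure: transform both polynomials, multiply pointwise, transform back. A minimal Python sketch with toy parameters of my own choosing (q = 257, n = 8, cyclic convolution); real schemes such as Dilithium use q = 8380417, n = 256 and a negacyclic (X^n + 1) variant.

```python
# Minimal radix-2 NTT over Z_q for cyclic polynomial multiplication.
# Toy parameters chosen for readability, not security.

Q, N, OMEGA = 257, 8, 4   # OMEGA is a primitive N-th root of unity mod Q

def ntt(a, omega):
    """Recursive Cooley-Tukey NTT: O(n log n) instead of schoolbook O(n^2)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % Q)   # omega^2 is a primitive (n/2)-th root
    odd = ntt(a[1::2], omega * omega % Q)
    out, w = [0] * n, 1
    for k in range(n // 2):                  # butterfly: combine half-size results
        t = w * odd[k] % Q
        out[k] = (even[k] + t) % Q
        out[k + n // 2] = (even[k] - t) % Q
        w = w * omega % Q
    return out

def cyclic_mul(a, b):
    """Multiply a, b in Z_q[X]/(X^N - 1) via pointwise product in NTT domain."""
    fa, fb = ntt(a, OMEGA), ntt(b, OMEGA)
    fc = [x * y % Q for x, y in zip(fa, fb)]
    inv_omega = pow(OMEGA, Q - 2, Q)         # inverse NTT uses omega^-1 ...
    inv_n = pow(N, Q - 2, Q)                 # ... and a final scaling by n^-1
    return [c * inv_n % Q for c in ntt(fc, inv_omega)]

# (1 + X) * (1 + X) = 1 + 2X + X^2 (no wraparound in this small example)
assert cyclic_mul([1, 1, 0, 0, 0, 0, 0, 0],
                  [1, 1, 0, 0, 0, 0, 0, 0]) == [1, 2, 1, 0, 0, 0, 0, 0]
```

Hardware butterfly units like those discussed in the surrounding works implement exactly the inner loop above; the number of parallel butterflies is the main throughput/area knob.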
Conference Paper
Polynomial multiplication is one of the most time-consuming operations utilized in lattice-based post-quantum cryptography (PQC) schemes. CRYSTALS-KYBER is a lattice-based key encapsulation mechanism (KEM), and it was recently announced as one of the four round-three finalists in NIST's PQC standardization. Therefore, efficient implementations of the polynomial multiplication operation are crucial for high-performance CRYSTALS-KYBER applications. In this paper, we propose three different hardware architectures (lightweight, balanced, high-performance) that implement the NTT, Inverse NTT (INTT) and polynomial multiplication operations for the CRYSTALS-KYBER scheme. The proposed architectures include a unified butterfly structure for optimizing polynomial multiplication and can be utilized for accelerating the key generation, encryption and decryption operations of CRYSTALS-KYBER. Our high-performance hardware with 16 butterfly units shows up to 112×, 132× and 109× improved performance for NTT, INTT and polynomial multiplication, respectively, compared to high-speed software implementations on the Cortex-M4.
... In this work, the proposed architecture can perform NTT, INTT, and NTT-based polynomial multiplication operations for NTT-friendly PQC schemes, namely CRYSTALS-KYBER (Kyber) with old and new parameters [25], [26], NewHope-512/1024 [27], CRYSTALS-DILITHIUM (Dilithium) [28], Falcon-I/II [29], and qTESLA-q-I [30]. Besides, the proposed multiplier can also be used for lattice-based schemes with ring degrees ranging from 256 to 1024 and NTT-friendly coefficients up to 30 bits. ...
... As progress in the construction of quantum computers has accelerated in recent years, PQC has attracted interest in academia as well as industry, and different PQC schemes have been proposed based on cryptographic constructions such as lattice-based cryptography [28], [27], [30] and code-based cryptography [33]. Lattice-based cryptography schemes provide security based on worst-case hardness guarantees, and they are claimed to be more efficient, simple, and parallelizable than other schemes [1]. ...
Preprint
In this paper, we introduce a configurable hardware architecture that can be used to generate unified and parametric NTT-based polynomial multipliers that support a wide range of parameters of lattice-based cryptographic schemes proposed for post-quantum cryptography. Both NTT and inverse NTT operations can be performed using the unified butterfly unit of our architecture, which constitutes the core building block in NTT operations. The multitude of this unit plays an essential role in achieving the performance goals of a specific application area or platform. To this end, the architecture takes the size of butterfly units as input and generates an efficient NTT-based polynomial multiplier hardware to achieve the desired throughput and area requirements. More specifically, the proposed hardware architecture provides run-time configurability for the scheme parameters and compile-time configurability for throughput and area requirements. This work presents the first architecture with both run-time and compile-time configurability for NTT-based polynomial multiplication operations to the best of our knowledge. The implementation results indicate that the advanced configurability has a negligible impact on the time and area of the proposed architecture and that its performance is on par with the state-of-the-art implementations in the literature, if not better. The proposed architecture comprises various sub-blocks such as modular multiplier and butterfly units, each of which can be of interest on its own for accelerating lattice-based cryptography. Thus, we provide the design rationale of each sub-block and compare it with those in the literature, including our earlier works in terms of configurability and performance.
... where M is a small integer. This modulus is used in the Dilithium signature scheme [50]. In [78], an adaptation of the Barrett algorithm is described. ...
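For concreteness, here is the generic Barrett reduction sketched in Python for the Dilithium modulus q = 2^23 - 2^13 + 1. The precision constant K is my own illustrative choice, not necessarily the variant described in [78].

```python
# Sketch of generic Barrett reduction for the Dilithium modulus.
# The idea: replace a runtime division by q with a multiplication by a
# precomputed reciprocal and a shift, then correct with a few subtractions.

Q = 8380417                 # q = 2^23 - 2^13 + 1
K = 46                      # precision: covers products of two 23-bit values
M = (1 << K) // Q           # precomputed reciprocal floor(2^K / q)

def barrett_reduce(a: int) -> int:
    """Reduce 0 <= a < 2^46 modulo Q without a runtime division."""
    quotient_est = (a * M) >> K        # approximates a // Q from below
    r = a - quotient_est * Q           # r is congruent to a mod Q, r >= 0
    while r >= Q:                      # at most a couple of correction steps
        r -= Q
    return r

x = 1234567890123
assert barrett_reduce(x) == x % Q
```

In hardware and constant-time software the final correction is done with a fixed number of conditional subtractions rather than a data-dependent loop; the loop above is only for clarity.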
... There have also been efforts to make the sampling algorithm run in constant time [69][70], but these propositions are less efficient and need more pseudorandom bits per sample than other sampling methods. The difficulties encountered with constant-time Gaussian sampling have motivated some schemes to use noise distributions that are easier to sample, such as the binomial distribution (Kyber [27], NewHope [4]) or the uniform distribution (ring-TESLA [3], Dilithium [50], Saber [45]). ...
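The centered binomial distribution mentioned above is attractive precisely because it can be sampled from a fixed number of uniform bits, with no data-dependent branching in a careful implementation. A simplified sketch (the byte-level packing used by the real schemes is omitted, and the function name is mine):

```python
# Sketch of centered-binomial noise sampling in the style of Kyber/NewHope:
# the difference of two sums of eta fair coin flips, giving values in
# [-eta, eta] with mean 0. Each sample consumes exactly 2*eta uniform bits.
import secrets

def sample_cbd(eta: int) -> int:
    """Return a centered-binomial sample with parameter eta."""
    a = sum(secrets.randbits(1) for _ in range(eta))
    b = sum(secrets.randbits(1) for _ in range(eta))
    return a - b

samples = [sample_cbd(2) for _ in range(1000)]
assert all(-2 <= s <= 2 for s in samples)
```

Because the bit count per sample is fixed, this approach avoids the rejection loops and table lookups that make discrete Gaussian samplers hard to protect against timing attacks.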
Thesis
Shor’s quantum algorithm can be used to efficiently solve the integer factorisation problem and the discrete logarithm problem in certain groups. The security of the most commonly used public key cryptographic protocols relies on the conjectured hardness of exactly these mathematical problems. Post-quantum cryptography relies on mathematical problems that are computationally hard for quantum computers, such as Learning with Errors (LWE) and its variants RLWE and MLWE. In this thesis, we present and compare FPGA implementations, using HLS, of LWE-, RLWE- and MLWE-based public-key encryption algorithms. We discuss various trade-offs between security, computation time and hardware cost. The implementations are parallelized in order to obtain maximal speed-up. We also discuss hardware security and propose countermeasures against side channel attacks. We consider countermeasures from the state of the art, such as masking, and propose improvements to these algorithms. Moreover, we propose new countermeasures based on redundant number representation and random shuffling of operations. All our countermeasures are implemented and evaluated on FPGA to compare their cost and performance.
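To make the LWE-based encryption pattern concrete, here is a toy Regev-style bit encryption in Python. The parameters and helper names are mine and offer no real security; practical designs use the structured RLWE/MLWE variants discussed above, with polynomial arithmetic in place of the matrix-vector products.

```python
# Toy Regev-style LWE encryption of a single bit (insecure toy parameters).
# Decryption works because the accumulated noise stays well below Q/4.
import random

Q, N, M = 3329, 16, 32          # toy modulus, secret dimension, sample count

def keygen():
    s = [random.randrange(Q) for _ in range(N)]
    A = [[random.randrange(Q) for _ in range(N)] for _ in range(M)]
    e = [random.choice([-1, 0, 1]) for _ in range(M)]           # small noise
    b = [(sum(A[i][j] * s[j] for j in range(N)) + e[i]) % Q for i in range(M)]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    subset = [i for i in range(M) if random.random() < 0.5]     # random subset sum
    u = [sum(A[i][j] for i in subset) % Q for j in range(N)]
    v = (sum(b[i] for i in subset) + bit * (Q // 2)) % Q        # encode bit at Q/2
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(N))) % Q            # noise + bit*Q/2
    return 1 if Q // 4 < d < 3 * Q // 4 else 0

pk, sk = keygen()
for bit in (0, 1):
    assert decrypt(sk, encrypt(pk, bit)) == bit
```

The correctness margin (noise at most M = 32 versus a decision threshold of Q/4 = 832) is exactly the kind of trade-off the thesis's "lack of perfect correctness" discussion is about: shrinking that margin improves parameters but opens the door to decryption failure attacks.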
... The majority of practical LWE-based cryptosystems are derived from its variants such as ring LWE [196], module LWE [182], cyclic LWE [135], continuous LWE [58], middle-product LWE [250], group LWE [116], entropic LWE [56] and polynomial-ring LWE [272]. Many cryptosystems have been constructed whose security can be proved under the hardness of the LWE problem, including (identity-based, attribute-based, leakage-resilient, fully homomorphic, functional, public-key/key-encapsulation) encryption [11,164,288,248,123,8,196,2,54,127,87,35,45,46,47,51,105,48,190,193], oblivious transfer [232,50,240], (blind) signatures [123,194,254,195,10,98,113], pseudorandom functions with special algebraic properties [22,44,21,20,55,261,43,53,61,165,166,241], hash functions [162,230], secure matrix multiplication [99,286], classically verifiable quantum computation [198], noninteractive zero-knowledge proof system for (any) NP language [231], obfuscation [185,122,137,56,12,81], multilinear maps [119,122,74], lossy-trapdoor functions [31,233,290], and many more [229]. ...
Preprint
Secret sharing allows a dealer to distribute a secret among several parties such that only authorized subsets of parties, specified by a (monotone) access structure, can reconstruct the secret. Recently, Sehrawat and Desmedt (COCOON 2020) introduced hidden access structures, that remain secret until some authorized subset of parties collaborate. However, that scheme assumes semi-honest parties and only supports restricted access structures. We address these shortcomings by constructing a novel access structure hiding verifiable secret sharing scheme, that supports all monotone access structures. Our scheme is the first verifiable secret sharing scheme that guarantees verifiability even when a majority of the parties are malicious. As the building blocks of our scheme, we introduce and construct: (i) a set-system $\mathcal{H}$ with greater than $\exp\left(c\frac{2(\log h)^2}{(\log\log h)}\right)+2\exp\left(c\frac{(\log h)^2}{(\log\log h)}\right)$ subsets of a set of $h$ elements. It is defined over $\mathbb{Z}_m$, where $m$ is a non-prime-power such that the size of each set in $\mathcal{H}$ is divisible by $m$ but the sizes of their pairwise intersections are not, unless one set is a subset of another, (ii) a new variant of the learning with errors (LWE) problem, called PRIM-LWE, wherein the secret matrix can be sampled such that its determinant is a generator of $\mathbb{Z}_q^*$, where $q$ is the LWE modulus. Our scheme relies on the hardness of LWE and its maximum share size for $\ell$ parties is $(1+ o(1)) \dfrac{2^{\ell}}{\sqrt{\pi \ell/2}}(2 q^{\varrho + 0.5} + \sqrt{q} + \Theta(h))$, where $q$ is the LWE modulus and $\varrho \leq 1$ is a constant. We also discuss directions for future work to reduce the share size to: $\leq \dfrac{1}{3} \left( (1+ o(1)) \dfrac{2^{\ell}}{\sqrt{\pi \ell/2}}(2 q^{\varrho + 0.5} + 2\sqrt{q}) \right).$
... A homomorphic encryption scheme is typically based on an RLWE problem over a ring R/qR, where R = Z[X]/(f(X)) and f(X) is a monic irreducible polynomial over Z. If f(X) is a cyclotomic polynomial whose degree is a power of two, then R/qR is called a power-of-two cyclotomic ring, and such rings are widely used in RLWE-based schemes due to their merits in efficiency and security (e.g., [2], [5], [9], [27], [29]). A plaintext space $\mathbb{Z}_t^n$ and a modulus polynomial f(X) determine the number of distinct roots in $\mathbb{Z}_t$, and hence the number of slots in the HE scheme. ...
Article
The Rasta cipher, proposed by Dobraunig et al. (CRYPTO 2018), is an HE-friendly cipher enjoying the fewest ANDs per bit and the lowest AND depth among the existing ciphers. A novel feature of Rasta is that its affine layers are freshly and randomly generated for every encryption. In this paper, we propose a new variant of Rasta, dubbed Masta. Similarly to Rasta, Masta takes as input a (master) secret key and a nonce, and generates a keystream block for each counter. On the other hand, Masta has two main differences from Rasta: Masta uses modular arithmetic to support HE schemes over a non-binary plaintext space, and it uses a smaller number of random bits in the affine layers by defining them with finite field multiplication. In this way, Masta outperforms Rasta in a transciphering framework with BGV/FV-style HE schemes. Our implementation shows that Masta is 505 to 592 times faster in terms of throughput on the client side, and 4792 to 6986 times faster on the server side.
... We provide a detailed analysis of attacks and an EUF-CMA proof for our scheme. Overall the parameters we propose are efficient and comparable in terms of signature size to the Dilithium lattice-based scheme [29]. ...
Thesis
Code-based cryptography is one of the fields that allows the construction of post-quantum cryptosystems, i.e. cryptosystems resistant to quantum computers. Unlike factorization and the discrete logarithm, which are the two problems most widely used in cryptography today, no algorithm is known that solves the decoding problem for random error-correcting codes in polynomial time on a quantum computer. In this thesis, we focus in particular on rank-metric code-based cryptography, in which we study error-correcting codes equipped with the rank metric rather than the Hamming metric. This metric has the advantage of allowing cryptosystems with smaller key sizes, but it is less mature than the Hamming metric. We first present two new decoding algorithms in the rank metric: the first is a combinatorial algorithm for solving the decoding problem in the case of random codes, which therefore allows a better estimate of attack complexity; the second is an improvement of the decoding algorithm for Low Rank Parity Check (LRPC) codes. We then present two code-based cryptosystems: a rank-metric signature scheme, Durandal, which is an adaptation of the Schnorr-Lyubashevsky approach from the Euclidean metric, and an improvement of the Hamming Quasi-Cyclic (HQC) encryption scheme in the Hamming metric, for which we propose a new analysis of the decryption failure rate and the use of another family of error-correcting codes. We then study two adaptations of the Schnorr-Lyubashevsky approach, one in the Hamming metric and the other in the rank metric, for which we give cryptanalyses that recover the schemes' keys by exploiting the information leakage in the signatures.
Finally, we present the choices made to implement rank-metric cryptosystems in the Rank-Based Cryptography (RBC) library.
... (Refer to Bernstein et al. 2019; Peikert 2019 for recent estimates on post-quantum security of CSIDH and CSI-FiSh.) Since we have several lattice-based signatures, e.g., Ducas et al. (2018), Fouque et al. (2017), Akleylek et al. (2017), we also have lattice-based AGKE from our lattice GKE. ...
Chapter
We revisit a generic compiler from a two-party key exchange (KE) protocol to a group KE (GKE) one by Just and Vaudenay. We then give two families of GKE protocols from static assumptions, which are obtained from the general compiler. The first family of the GKE protocols is a constant-round GKE by using secure key derivation functions (KDFs). As special cases, we have such GKE from static Ring-LWE (R-LWE), where “static” means that the parameter size in the R-LWE does not depend on the number of group members, n, and also from the standard SI-DDH and CSI-DDH assumptions. The second family consists of two-round GKE protocols from isogenies, which are proven secure from new isogeny assumptions, the first (resp. second) of which is based on the SIDH (resp. CSIDH) two-party KE. The underlying new static assumptions are based on indistinguishability between a product value of supersingular invariants and a random value.
... Currently, the NIST competition [21] for the development of post-quantum public-key cryptosystems has entered its final stage [22]. The finalists in the category of post-quantum signatures were Falcon [23], Crystals-Dilithium [24], and Rainbow [25]. It is interesting to compare the proposed signature scheme with the finalists and with other HDLP-based signatures. ...
Article
Introduction: The development of practical post-quantum signature schemes is a current challenge in applied cryptography. Recently, several different forms of the hidden discrete logarithm problem were proposed as primitives for signature schemes resistant to quantum attacks. Purpose: Development of a new form of the hidden discrete logarithm problem set in finite commutative groups possessing multi-dimensional cyclicity, and a method for designing post-quantum signature schemes. Results: A new form of the hidden discrete logarithm problem is introduced as the base primitive of practical post-quantum digital signature algorithms. Two new four-dimensional finite commutative associative algebras have been proposed as algebraic support for the introduced computationally complex problem. A method for designing signature schemes on the base of the latter problem is developed. The method consists in using a doubled public key and two similar equations for the verification of the same signature. To generate a pair of public keys, two secret minimum generator systems <G, Q> and <H, V> of two different finite groups G_<G,Q> and G_<H,V> possessing two-dimensional cyclicity are selected at random. The first public key (Y, Z, U) is computed as follows: Y = G^y1·Q^y2·a, Z = G^z1·Q^z2·b, U = G^u1·Q^u2·g, where the set of integers (y1, y2, a, z1, z2, b, u1, u2, g) is the private key. The second public key (Y′, Z′, U′) is computed as follows: Y′ = H^y1·V^y2·a, Z′ = H^z1·V^z2·b, U′ = H^u1·V^u2·g. Using the same parameters to calculate the corresponding elements belonging to different public keys makes it possible to calculate a single signature which satisfies two similar verification equations specified in different finite commutative associative algebras. 
Practical relevance: Due to the smaller sizes of the public key, private key, and signature, as well as approximately equal performance compared to the known analogues, the proposed digital signature scheme can be used in the development of post-quantum signature algorithms.
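As a rough illustration of the doubled-public-key idea above (the same private integers reused to build corresponding elements in two unrelated algebraic structures), here is a toy sketch in which multiplicative groups modulo two primes stand in for the paper's four-dimensional algebras; all concrete numbers and names are hypothetical:

```python
# Toy sketch: one private tuple, two public triples in two different groups.
# Multiplicative groups mod two primes are only a stand-in for the paper's
# finite commutative associative algebras.
def public_triple(p, G, Q, priv):
    y1, y2, a, z1, z2, b, u1, u2, g = priv
    Y = pow(G, y1, p) * pow(Q, y2, p) * a % p
    Z = pow(G, z1, p) * pow(Q, z2, p) * b % p
    U = pow(G, u1, p) * pow(Q, u2, p) * g % p
    return Y, Z, U

priv = (5, 7, 3, 11, 13, 2, 17, 19, 6)        # shared private key (illustrative)
key1 = public_triple(1009, 3, 7, priv)        # first public key (Y, Z, U)
key2 = public_triple(2003, 5, 11, priv)       # second public key (Y', Z', U')
```

The point of the construction is that both triples are bound to one private tuple, so a single signature can be checked against two verification equations, one per structure.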
... This in turn makes timing attacks, such as those FALCON is vulnerable to, impossible by design. [22] VI. CONCLUSION To conclude, we can see that there are two NIST finalists with promising performance even for the IoT. Some performance improvements for specific platforms are still possible (like Dilithium's AVX2 vector optimizations), but even without them we have performance competitive with more traditional schemes like ECDSA. ...
Preprint
Full-text available
Quantum computers are on the horizon to reach a sufficient size. They will then be able to break most of the encryption and signature schemes currently in use. This is the case for human interface devices as well as for IoT nodes. In this paper I compare some signature schemes currently in the process of standardization by NIST. After explaining the underlying reasons why some schemes differ in certain aspects from others, I evaluate which currently available implementations are better suited for usage in IoT use-cases. We then focus on the most promising schemes, FALCON and Dilithium, which differ in one significant aspect that makes FALCON worse for signing but very good for verification purposes.
... Note that this compiler needs a secure signature scheme. Fortunately, there are lattice-based signature schemes [15][16][17] which are strongly unforgeable under adaptive chosen-message attacks (SUF-CMA), and that is enough for the compiler. In other words, if there is a lattice-based GKE, then we have a lattice-based authenticated GKE. ...
Conference Paper
Group key exchange schemes allow group members to agree on a session key. Although there are many works on constructing group key exchange schemes, most of them are based on algebraic problems which can be solved by quantum algorithms in polynomial time. Several works have considered lattice-based group key exchange schemes, believed to be post-quantum secure, but only in the random oracle model. In this work, we propose a group key exchange scheme based on the ring learning with errors problem. In contrast to existing schemes, our scheme is proved to be secure in the standard model. To achieve this, we define and instantiate a multi-party key reconciliation mechanism. Furthermore, using a known compiler with lattice-based signature schemes, we can achieve authenticated group key exchange with post-quantum security.
... Dilithium introduces Dilithium 2, Dilithium 3, and Dilithium 5, which correspond to NIST levels 2, 3, and 5 of the post-quantum security categories, respectively [6]. The design of Dilithium is based on the "Fiat-Shamir with Aborts" approach, SHAKE for its hashing algorithm, and other security schemes in lattice-based cryptography [16]. For the NIST round 3 submissions, Dilithium also introduced variants of Dilithium 2, Dilithium 3, and Dilithium 5 that use AES rather than SHAKE to expand the public key matrix, and to show the efficiency of the algorithm given that current hardware is more optimized for AES [17]. ...
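The seed-expansion step mentioned above (Dilithium derives its public matrix from a short seed with SHAKE) can be sketched with Python's hashlib SHAKE-128. The domain separation and the plain modular reduction below are simplifications and assumptions; the real scheme rejection-samples 23-bit candidates instead of reducing mod q:

```python
import hashlib

Q = 8380417  # Dilithium's modulus q = 2^23 - 2^13 + 1

def expand_coeffs(seed: bytes, row: int, col: int, n: int = 256) -> list:
    # Derive n polynomial coefficients mod Q from a seed and a matrix
    # position, using SHAKE-128 as an extendable-output function.
    # Simplified: real Dilithium rejects 23-bit chunks >= Q rather than
    # reducing mod Q, which avoids the slight bias introduced here.
    xof = hashlib.shake_128(seed + bytes([row, col]))
    stream = xof.digest(4 * n)
    return [int.from_bytes(stream[4*i:4*i+4], "little") % Q for i in range(n)]

coeffs = expand_coeffs(b"\x00" * 32, 0, 1)
assert len(coeffs) == 256 and all(0 <= c < Q for c in coeffs)
```

Because the matrix is regenerated from the seed on demand, the public key only needs to store 32 bytes for it, which is one reason Dilithium's keys are compact.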
... The first lattice hard assumption was proposed by [32]. In recent years, various cryptographic primitives were proposed based on the lattice hard assumptions [33][34][35][36][37]. Lohachab et al. [38] presented a comprehensive survey regarding the use of post-quantum cryptography in the field of IoT. ...
Article
With the Internet of Things (IoT) growing rapidly, the Internet of Vehicles (IoV) has become an essential part of smart cities and has attracted the full attention of both the academic and business communities. Because of the public transmission channel, security and privacy in IoV have received serious attention. In IoV, it is crucial to generate a secret session key among the various vehicles and road-side units (RSUs) so that they can share confidential information over the public Internet. Thus, an authenticated key agreement (AKA) protocol is needed that can meet the session-key requirement in the IoV for secure communication. For this purpose, various AKA techniques have been designed using a number of different tools. Several existing AKA protocols either suffer from different attacks or are inefficient for the IoV environment due to their excessive communication and computational costs. Many such traditional schemes have used either Diffie–Hellman (DH) or prime-factorization-type hard problems. These hard problems are vulnerable to future technologies like quantum computers. Besides, existing quantum-resistant AKA protocols use lattice cryptography for their security. However, these protocols either incur an overhead of certificate management or have excessive communication and computational costs. Hence, there is a need for quantum-resistant AKA protocols which remove the certificate overhead and are also efficient for the IoV. In this paper, we propose a lattice-based two-party authenticated key agreement (LB-ID-2PAKA) protocol using identity-based cryptography (IBC). The lattice hard problems can resist quantum computers, and IBC removes the overhead of certificate management. The security strength of the proposed LB-ID-2PAKA protocol is analyzed under the random oracle model to show its robustness against present as well as future quantum attacks. 
In addition, resiliency against different security attacks such as the man-in-the-middle (MITM) attack, known-key security (K-KS), and the unknown key-share (UK-S) attack is also included. Further, the performance analysis shows that the proposed LB-ID-2PAKA protocol outperforms the existing protocols and is feasible for IoV applications.
... DILITHIUM is an advanced signature scheme that is strongly secure under chosen-message attacks, based on the hardness of lattice problems over module lattices. This security notion means that an adversary with access to a signing oracle cannot produce a signature of a message whose signature he has not yet seen, nor produce a different signature of a message that he already saw signed [15]. RAINBOW is a new way of calculating cryptanalytic data with variable chain length, resulting in fewer operations and less time for cracking password hashes [16]. ...
Article
Full-text available
Digital currency is primarily designed on problems that are computationally hard to solve using traditional computing techniques. However, these problems are now vulnerable to the computational power of quantum computing. For the post-quantum computing era, there is an immense need to re-invent the existing digital security measures. Problems that are computationally hard for any quantum computer will be a possible solution to that. This research summarizes the current security measures and how the new class of hard problems will drive the future protection of existing digital currency from the quantum threat. MOTIVATION: Crypto-currencies such as Bitcoin, Ethereum, and others are taking the world by storm. A crypto-currency is a decentralized digital currency free from any government or institutional interference and can operate in an open, peer-to-peer network. The mining of these coins and transactions
... Then the algorithm A can be used to find the solution of R-SIS. Thus, in lattice-based cryptographic schemes [11], [12], [13], [14], M-SIS is preferred due to its fundamental difficulty as well as the reduced key size, and we therefore do not work on the existence of an algorithm to solve R-SIS. ...
Article
Full-text available
Many lattice-based cryptographic schemes are constructed based on hard problems on an algebraic structured lattice, such as the short integer solution (SIS) problems. These problems are called ring-SIS (R-SIS) and its generalized version, module-SIS (M-SIS). Generally, it has been considered that problems defined on the module lattice are more difficult than problems defined on the ideal lattice. However, Koo, No, and Kim showed that R-SIS is more difficult than M-SIS under some norm constraints of R-SIS. Yet this reduction has problems in that the rank of the module is limited to about half of the number of R-SIS instances, and the comparison is not performed through the same modulus of R-SIS and M-SIS. In this paper, we propose that R-SIS is more difficult than M-SIS with the same modulus and ring dimension under some constraint of R-SIS. Also, we show that R-SIS with prime modulus q is more difficult than M-SIS with composite modulus c such that c is divisible by q. In particular, we show that through the reduction from M-SIS to R-SIS with the same modulus, the rank of the module is extended from half of the number of R-SIS instances to the full number of R-SIS instances. Finally, this paper shows that R-SIS is more difficult than M-SIS under some constraints, which is tighter than the M-SIS result in the previous work.
... Then the SIS problem is defined as follows: for any positive integers m, n, given a positive real β ∈ ℝ and a positive integer q, the SIS problem is to find a solution z ∈ ℤ^m such that A · z = 0 mod q and 0 < ‖z‖ ≤ β for a uniformly random matrix A ∈ ℤ_q^(n×m). A collision-resistant hash function can be constructed using the SIS problem, and it can be used for signature schemes, identification schemes, and so on [7], [8], [9], [10], [11]. ...
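The SIS definition quoted above is easy to state as a checker. A minimal sketch follows; the parameter choices are illustrative, and note that when β ≥ q the trivial vector (q, 0, …, 0) already qualifies, which is why real parameter sets require β < q:

```python
import random

def is_sis_solution(A, z, q, beta):
    # Check the SIS conditions: z nonzero, ||z||_2 <= beta, and A·z ≡ 0 (mod q).
    n, m = len(A), len(A[0])
    if all(zi == 0 for zi in z):
        return False
    norm_sq = sum(zi * zi for zi in z)          # compare squared norms to stay in integers
    if norm_sq > beta * beta:
        return False
    return all(sum(A[i][j] * z[j] for j in range(m)) % q == 0 for i in range(n))

q = 17
A = [[random.randrange(q) for _ in range(4)] for _ in range(2)]
z = [q, 0, 0, 0]   # trivially valid when beta >= q, since q ≡ 0 (mod q)
assert is_sis_solution(A, z, q, beta=q)
```

Finding a *short* nonzero z (β far below q) for a random A is the hard problem underlying the hash functions and signatures mentioned in the snippet.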
Article
Full-text available
Lattice-based cryptographic schemes are constructed based on hard problems on a lattice, such as the short integer solution (SIS) problem and learning with errors (LWE). However, cryptographic schemes based on SIS or LWE are inefficient since the size of the key is too large. Thus, most cryptographic schemes use variants of LWE and SIS with ring and module structures. Albrecht and Deo showed that there is a reduction from module-LWE (M-LWE) to ring-LWE (R-LWE) in the polynomial ring by handling the error rate and modulus. However, unlike the LWE problem, the SIS problem does not have an error rate; instead, there is an upper bound β on the norm of the solution of the SIS problem. In this paper, we propose two novel reductions related to module-SIS (M-SIS) and ring-SIS (R-SIS) on a polynomial ring. We propose (i) the reduction from R-SIS_{q^k,m^k,β^k} to R-SIS_{q,m,β} and (ii) the reduction from M-SIS to R-SIS under a norm constraint of R-SIS. Combining these two results implies that R-SIS for a specified modulus and number of samples is more difficult than M-SIS under norm constraints of R-SIS, which provides the range of possible module ranks for M-SIS. From the reduction we propose, contrary to the widely known belief, our result shows that there is a possibility that the security parameters of M-SIS may be less secure when it reduces to R-SIS, for the theoretical reasons presented in this paper. Therefore, when generating parameters on an M-SIS structure, the theoretical security level over R-SIS should also be checked at the same time.
... Unfortunately, Shor [12] pointed out that the two problems are easily solved by quantum computers, so signature schemes based on these two problems are no longer secure in the quantum era. Lattice cryptography is one of the post-quantum candidate families considered by the National Institute of Standards and Technology [13]. Meanwhile, in recent years, lattice cryptography has achieved rapid developments in refining security assessments and in fast implementations [14, 15]. ...
Article
Full-text available
Digital signatures are crucial network security technologies. However, in traditional public key signature schemes, certificate management is complicated, and the schemes are vulnerable to public key replacement attacks. In order to solve these problems, in this paper we propose a self-certified signature scheme over a lattice. Using the self-certified public key, our scheme allows a user to certify the public key without an extra certificate. This reduces the communication overhead and computational cost of the signature scheme. Moreover, the lattice helps prevent quantum computing attacks. Then, based on the small integer solution problem, our scheme is provably secure in the random oracle model. Furthermore, compared with previous self-certified signature schemes, our scheme is more secure.
... On the other hand, blockchain, as a novel distributed consensus scheme, also plays a great role in various fields [9][10][11][12]. Besides, there are also other similar works [14,15,26,28,29]. ...
Article
Full-text available
The electronic reporting system can alleviate the problems in terms of efficiency, content confidentiality, and reporter privacy imposed by the traditional reporting system. Relying on anonymity, the privacy of reporters can be protected, but the authentication of reporters with fake names should also be maintained. If authenticated anonymity is guaranteed, reporters may still misbehave, for example by submitting fake reports after authentication. To address the above dilemma, we propose to apply a proxy signature to achieve authenticated anonymity and to employ blockchain to maintain anonymity yet guarantee traceability of reporters' misbehavior. We also propose a new proxy signature scheme in this paper over module lattices for post-quantum security. The extensive analysis justifies that our proposed scheme is secure and manageable.
Article
Many lattice-based schemes are built from the hardness of the learning with errors problem, which naturally comes in two flavors: decision LWE and search LWE. In this paper, we investigate decision LWE and search LWE via the Rényi divergence (RD) and obtain the following results: For decision LWE, we apply RD to LWE variants with different error distributions (i.e., the centered binomial distribution and the uniform distribution, which are frequently used in the NIST PQC submissions) and prove the pseudorandomness in theory. As a by-product, we extend the so-called public sampleability property and present an adaptively public sampling property for the application of Rényi divergence to more decision problems. As for search LWE, we improve the classical reduction proof from GapSVP to LWE. Besides, as an independent interest, we also explore the intrinsic relation between the decision problem and the search problem.
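For readers unfamiliar with the Rényi divergence used in the abstract above, here is a minimal numerical sketch of its finite, order-α form; representing distributions as dictionaries is just an implementation convenience:

```python
def renyi_divergence(P, Q, alpha=2.0):
    # R_alpha(P || Q) = ( sum_x P(x)^alpha / Q(x)^(alpha-1) )^(1/(alpha-1)),
    # defined when the support of P is contained in the support of Q.
    s = sum(p ** alpha / Q[x] ** (alpha - 1) for x, p in P.items() if p > 0)
    return s ** (1.0 / (alpha - 1))

P = {0: 0.5, 1: 0.5}
Q = {0: 0.4, 1: 0.6}
assert renyi_divergence(P, P) == 1.0   # identical distributions give the minimum, 1
assert renyi_divergence(P, Q) > 1.0    # any discrepancy pushes the divergence above 1
```

In security proofs of the kind the abstract describes, a bound on this quantity between an ideal and an implemented distribution translates into a bound on how much an adversary's success probability can change.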
Chapter
We present qTESLA, a post-quantum provably-secure digital signature scheme that exhibits several attractive features such as simplicity, strong security guarantees against quantum adversaries, and built-in protection against certain side-channel and fault attacks. qTESLA—selected for round 2 of NIST’s post-quantum cryptography standardization project—consolidates a series of recent schemes originating in works by Lyubashevsky, and Bai and Galbraith. We provide full-fledged, constant-time portable C implementations consisting of only about 300 lines of C code, which showcases the code compactness of the scheme. Our results also demonstrate that a conservative, provably-secure signature scheme can be efficient and practical, even with a compact and portable implementation. For instance, our C-only implementation executes signing and verification in approximately 0.9 ms on an x64 Intel processor using the proposed level 1 parameter set. Finally, we also provide AVX2-optimized assembly implementations that achieve an additional factor-1.5 speedup.
Chapter
Today’s most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92) which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions.
Chapter
A canonical identification (CID) scheme is a 3-move protocol consisting of a commitment, challenge, and response. It constitutes the core design of many cryptographic constructions such as zero-knowledge proof systems and various types of signature schemes. Unlike number-theoretic constructions, CID in the lattice setting usually forces provers to abort and repeat the whole authentication process once the distribution of the computed response does not follow a target distribution independent from the secret key. This concept has been realized by means of rejection sampling, which makes sure that the secrets involved in a protocol are concealed after a certain number of repetitions. This, however, has a negative impact on the efficiency of interactive protocols because it leads to a number of communication rounds that is multiplicative in the number of aborting participants (or rejection sampling procedures). In this work we show how the CID scheme underlying many lattice-based protocols can be designed with a smaller number of aborts or even without aborts. Our new technique exploits (unbalanced) binary hash trees and thus significantly reduces the communication complexity. We show how to apply this new method within interactive zero-knowledge proofs. We also present BLAZE+: a further application of our technique to the recently proposed lattice-based blind signature scheme BLAZE (FC'20). We show that BLAZE+ has an improved performance and communication complexity compared to BLAZE while preserving the size of keys and signatures.
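The abort mechanism described above can be caricatured in a few lines: the prover only releases a response when it falls in a range whose distribution does not depend on the secret; otherwise the protocol restarts. The interval parameters below are illustrative only, not those of any real scheme:

```python
import random

def sample_with_abort(secret_shift, bound=100, tail=5):
    # Fiat-Shamir-with-aborts flavour: draw a masking value y, form the
    # response z = y + secret_shift, and abort (restart) whenever z falls
    # outside the safe interval where its distribution is uniform
    # regardless of the secret (requires |secret_shift| <= tail).
    attempts = 0
    while True:
        attempts += 1
        y = random.randrange(-bound, bound + 1)
        z = y + secret_shift
        if -(bound - tail) <= z <= bound - tail:   # accept: z leaks nothing
            return z, attempts

z, attempts = sample_with_abort(secret_shift=3)
assert -95 <= z <= 95 and attempts >= 1
```

Each abort costs a restart, which is exactly the communication overhead that the hash-tree technique in the abstract aims to reduce or remove.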
Chapter
Blind signatures constitute basic cryptographic ingredients for privacy-preserving applications such as anonymous credentials, e-voting, and Bitcoin. Despite the great variety of cryptographic applications blind signatures also found their way in real-world scenarios. Due to the expected progress in cryptanalysis using quantum computers, it remains an important research question to find practical and secure alternatives to current systems based on the hardness of classical security assumptions such as factoring and computing discrete logarithms. In this work we present BLAZE: a new practical blind signature scheme from lattice assumptions. With respect to all relevant efficiency metrics BLAZE is more efficient than all previous blind signature schemes based on assumptions conjectured to withstand quantum computer attacks. For instance, at approximately 128 bits of security signatures are as small as 6.6 KB, which represents an improvement factor of 2.7 compared to all previous candidates, and an expansion factor of 2.5 compared to the NIST PQC submission Dilithium. Our software implementation demonstrates the efficiency of BLAZE to be deployed in practical applications. In particular, generating a blind signature takes just 18 ms. The running time of both key generation and verification is in the same order as state-of-the-art ordinary signature schemes.
Chapter
A significant concern for the candidate schemes of the NIST post-quantum cryptography standardization project is the protection they support against side-channel attacks. One of the candidate schemes currently in the NIST standardization race is the Dilithium signature scheme. This post-quantum signature solution has been analyzed for side-channel attack resistance, especially against timing attacks. Expanding our attention to other types of side-channel analysis, this work focuses on correlation-based differential side-channel attacks on the polynomial multiplication operation of Dilithium digital signature generation. In this paper, we describe how a Correlation Power Attack should be adapted for Dilithium signature generation and describe the attack process to be followed. We determine the conditions to be met for such an attack to be feasible (isolation of polynomial coefficient multiplication in power traces), and we create a power-trace profiling paradigm for the Dilithium signature scheme executed in embedded systems to showcase that the conditions can be met in practice. Expanding the methodology of recent works that mainly use simulations for power trace collection, in this paper power trace capturing and profiling analysis of the signature generation process was successfully performed on a noisy, commercial off-the-shelf ARM Cortex-M4 embedded system.
Chapter
OR-proofs enable a prover to show that it knows the witness for one of many statements, or that one out of many statements is true. OR-proofs are a remarkably versatile tool, used to strengthen security properties, design group and ring signature schemes, and achieve tight security. The common technique to build OR-proofs is based on an approach introduced by Cramer, Damgård, and Schoenmakers (CRYPTO’94), where the prover splits the verifier’s challenge into random shares and computes proofs for each statement in parallel.
Chapter
Multisignature schemes are attractive for use in cryptographic applications such as the blockchain. Though lattice-based constructions of multisignature schemes exist as quantum-secure multisignatures, a multisignature scheme whose security is proven in the quantum random oracle model (QROM), rather than the classical random oracle model (CROM), was not known. In this paper, we propose the first lattice-based multisignature scheme whose security is proven in the QROM. What makes proving security harder in the QROM than in the CROM is how to program the random oracle in the security proof. Although our proposed scheme is based on the Dilithium-QROM signature, whose security is proven in the QROM, its proof technique cannot be directly applied to the multisignature setting. To solve the problems in the security proof, we develop several proof techniques for the QROM. First, we employ the searching-query technique by Targhi and Unruh to convert Dilithium-QROM into the multisignature setting. Second, we develop a new programming technique for the QROM, since conventional programming techniques seem not to work in the multisignature setting of the QROM. We combine the programming technique of Unruh with that of Liu and Zhandry. The new technique enables us to program the random oracle in the QROM and to construct the signing oracle in the security proof.
Chapter
This work describes the Mitaka signature scheme: a new hash-and-sign signature scheme over NTRU lattices which can be seen as a variant of NIST finalist Falcon. It achieves comparable efficiency but is considerably simpler, online/offline, and easier to parallelize and protect against side-channels, thus offering significant advantages from an implementation standpoint. It is also much more versatile in terms of parameter selection. We obtain this signature scheme by replacing the FFO lattice Gaussian sampler in Falcon by the “hybrid” sampler of Ducas and Prest, for which we carry out a detailed and corrected security analysis. In principle, such a change can result in a substantial security loss, but we show that this loss can be largely mitigated using new techniques in key generation that allow us to construct much higher quality lattice trapdoors for the hybrid sampler relatively cheaply. This new approach can also be instantiated on a wide variety of base fields, in contrast with Falcon’s restriction to power-of-two cyclotomics. We also introduce a new lattice Gaussian sampler with the same quality and efficiency, but which is moreover compatible with the integral matrix Gram root technique of Ducas et al., allowing us to avoid floating point arithmetic. This makes it possible to realize the same signature scheme as Mitaka efficiently on platforms with poor support for floating point numbers. Finally, we describe a provably secure masking of Mitaka. More precisely, we introduce novel gadgets that allow provable masking at any order at much lower cost than previous masking techniques for Gaussian sampling-based signature schemes, for cheap and dependable side-channel protection.
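At the core of samplers like the hybrid one discussed above lies the task of sampling a discrete Gaussian over the integers. Here is a minimal, non-constant-time rejection sampler for illustration only; real implementations of schemes like Falcon or Mitaka must be constant-time and use much more careful tail and precision handling:

```python
import math
import random

def discrete_gaussian(sigma, tau=10):
    # Rejection-sample an integer x with probability proportional to
    # exp(-x^2 / (2*sigma^2)), truncated to |x| <= tau*sigma.
    # NOT constant-time: for demonstration only.
    bound = int(math.ceil(tau * sigma))
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x

samples = [discrete_gaussian(3.0) for _ in range(1000)]
assert all(abs(x) <= 30 for x in samples)
```

The timing of this loop depends on the candidate values, which is precisely the kind of leakage that motivated the side-channel-friendly sampler designs described in the abstract.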
Chapter
We propose a new framework for (trapdoor) sampling over lattices. Our framework can be instantiated in a number of ways. It allows for example to sample from uniform, affine and “product affine” distributions. Another salient point of our framework is that the output distributions of our samplers are perfectly indistinguishable from ideal ones, in contrast with classical samplers that are statistically indistinguishable. One caveat of our framework is that all our current instantiations entail a rather large standard deviation.
Chapter
We construct efficient ring signatures (RS) from isogeny and lattice assumptions. Our ring signatures are based on a logarithmic OR proof for group actions. We instantiate this group action by either the CSIDH group action or an MLWE-based group action to obtain our isogeny-based or lattice-based RS scheme, respectively. Even though the OR proof has a binary challenge space and therefore requires a number of repetitions which is linear in the security parameter, the sizes of our ring signatures are small and scale better with the ring size N than previously known post-quantum ring signatures. We also construct linkable ring signatures (LRS) that are almost as efficient as the non-linkable variants. The isogeny-based scheme produces signatures whose size is an order of magnitude smaller than all previously known logarithmic post-quantum ring signatures, but it is relatively slow (e.g. 5.5 KB signatures and 79 s signing time for rings with 8 members). In comparison, the lattice-based construction is much faster, but has larger signatures (e.g. 30 KB signatures and 90 ms signing time for the same ring size). For small ring sizes our lattice-based ring signatures are slightly larger than state-of-the-art schemes, but they are smaller for ring sizes larger than N ≈ 1024.
Chapter
We prove that the module learning with errors (M-LWE) problem with arbitrary polynomial-sized modulus p is classically at least as hard as standard worst-case lattice problems, as long as the module rank d is not smaller than the number field degree n. Previous publications only showed the hardness under quantum reductions. We achieve this result in an analogous manner as in the case of the learning with errors (LWE) problem. First, we show the classical hardness of M-LWE with an exponential-sized modulus. In a second step, we prove the hardness of M-LWE using a binary secret. And finally, we provide a modulus reduction technique. The complete result applies to the class of power-of-two cyclotomic fields. However, several tools hold for more general classes of number fields and may be of independent interest.
Article
Leakages during the signing process, including partial key exposure and partial (or complete) randomness exposure, may be devastating for the security of digital signatures. In this work, we investigate the security of lattice-based Fiat-Shamir signatures in the presence of randomness leakage. To this end, we present a generic key recovery attack that relies on minimum leakage of randomness, and then theoretically connect it to a variant of the Integer-LWE (ILWE) problem. The ILWE problem, introduced by Bootle et al. at Asiacrypt 2018, is to recover the secret vector s given polynomially many samples of the form (a, ⟨a, s⟩ + e) ∈ ℤ^(n+1), and it is solvable if the error e ∈ ℤ is not superpolynomially larger than the inner product ⟨a, s⟩. However, in our variant (which we call the FS-ILWE problem in this paper), a ∈ ℤ^n is a sparse vector whose coefficients are NOT independent any more, and e is related to a and s as well. We prove that the FS-ILWE problem can be solved in polynomial time, and present an efficient algorithm to solve it. Our generic key recovery method directly implies that many lattice-based Fiat-Shamir signatures will be totally broken with one (deterministic or probabilistic) bit of randomness leakage per signature. Our attack has been validated by experiments on two NIST PQC signatures, Dilithium and qTESLA. For example, for Dilithium-III at 125-bit quantum security, the secret key will be recovered within 10 seconds on an ordinary desktop PC, with about one million signatures. Similarly, key recovery attacks on Dilithium under other parameters and on qTESLA complete within 20 seconds and 31 minutes, respectively. 
In addition, we also present a non-profiled attack to show how to obtain the required randomness bit in practice through power analysis attacks on a proof-of-concept implementation of polynomial addition. The experimental results confirm the practical feasibility of our method.
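The ILWE-style recovery underlying the attack above can be illustrated in one dimension, where the least-squares estimator for samples b_i = a_i·s + e_i collapses to a single rounded ratio. The sample sizes and distributions below are illustrative, not those from the paper:

```python
import random

def recover_secret_1d(samples):
    # Least-squares estimate for b = a*s + e: in one dimension this is
    # round( sum(a_i * b_i) / sum(a_i^2) ); the noise averages out.
    num = sum(a * b for a, b in samples)
    den = sum(a * a for a, _ in samples)
    return round(num / den)

s = 42  # the secret to recover
samples = [(a := random.randrange(-8, 9), a * s + random.randrange(-2, 3))
           for _ in range(5000)]
assert recover_secret_1d(samples) == s
```

With many samples the error term contributes only O(1/sqrt(#samples)) to the ratio, which is why, as the abstract reports, a single leaked bit per signature suffices once enough signatures are collected.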
Book
Full-text available
This open access book presents selected papers from the International Symposium on Mathematics, Quantum Theory, and Cryptography (MQC), which was held on September 25-27, 2019 in Fukuoka, Japan. The international symposium MQC addresses the mathematics and quantum theory underlying secure modeling of post-quantum cryptography, including, e.g., mathematical studies of light-matter interaction models as well as quantum computing. The security of the most widely used RSA cryptosystem is based on the difficulty of factoring large integers. However, in 1994 Shor proposed a quantum polynomial-time algorithm for factoring integers, and the RSA cryptosystem is no longer secure in the quantum computing model. This vulnerability has prompted research into post-quantum cryptography using alternative mathematical problems that are secure in the era of quantum computers. In this regard, the National Institute of Standards and Technology (NIST) began to standardize post-quantum cryptography in 2016. This book is suitable for postgraduate students in mathematics and computer science, as well as for experts in industry working on post-quantum cryptography.
Article
Full-text available
Classical cryptographic schemes in use today are based on the difficulty of certain number theoretic problems. Security is guaranteed by the fact that the computational work required to break the core mechanisms of these schemes on a conventional computer is infeasible; however, the difficulty of these problems would not withstand the computational power of a large-scale quantum computer. To this end, the post-quantum cryptography (PQC) standardization process initiated by the National Institute of Standards and Technology (NIST) is well underway. In addition to the evaluation criteria provided by NIST, the energy consumption of these candidate algorithms is also an important criterion to consider due to the use of battery-operated devices, high-performance computing environments where energy costs are critical, as well as in the interest of green computing. In this paper, the energy consumption of PQC candidates is evaluated on an Intel Core i7-6700 CPU using PAPI, the Performance API. The energy measurements are categorized based on their proposed security level and cryptographic functionality. The results are then further subdivided based on the underlying mechanism used in order to identify the most energy-efficient schemes. Lastly, IgProf is used to identify the most energy-consuming subroutines within a select number of submissions to highlight potential areas for optimization.
Article
Lattice-based online/offline signatures are attractive because they resist quantum attacks while offering a short online response time. Prior to this work, lattice-based online/offline signatures in the hash-sign-switch paradigm usually increase the length of each signature, and the Fiat–Shamir candidates are highly inefficient due to multiple aborts in the online signing phase. In this work we address this efficiency issue and propose a new construction paradigm from the perspective of aborts. In this paradigm, one tries to move one or more aborts from the online to the offline signing phase via a $\Gamma$-transformation. Specifically, this work proposes an efficient lattice-based online/offline signature scheme with fewer online aborts, which allows the signer to obtain a valid signature with fewer online repetitions. In this way, the resulting scheme reduces online signing time considerably with the same signature size. The performance evaluation shows that our scheme is efficient and practical.
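The "aborts" in question come from rejection sampling in Fiat–Shamir-with-aborts lattice signatures: the signer resamples its masking vector until the candidate response leaks nothing about the secret key. A minimal sketch of that online loop with toy parameters (the bounds gamma and beta and the one-coefficient challenge model are illustrative, not taken from the paper):

```python
import random

def sign_response(s, gamma, beta, rng):
    """Resample a masking vector y until z = y + c*s stays within the
    safe range |z_i| <= gamma - beta; each failed iteration is an abort."""
    n = len(s)
    while True:
        y = [rng.randint(-gamma + 1, gamma - 1) for _ in range(n)]
        c = rng.choice([-1, 1])           # toy challenge
        z = [yi + c * si for yi, si in zip(y, s)]
        if max(abs(zi) for zi in z) <= gamma - beta:
            return z                      # accepted: no abort, z hides s

rng = random.Random(0)
z = sign_response(s=[3, -2, 1, 0], gamma=100, beta=3, rng=rng)
```

Moving such iterations offline, as the paper proposes, means the expensive resampling no longer sits on the latency-critical path of signing.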
Chapter
This work introduces the first differential side-channel analysis of the Picnic Signature Scheme, an alternate candidate in the ongoing competition for post-quantum cryptography by the National Institute of Standards and Technology (NIST). We present a successful side-channel analysis of the underlying multiparty implementation of the LowMC block cipher (MPC-LowMC) and show how side-channel information can be used to recover the entire secret key by exploiting two different parts of the algorithm. LowMC key recovery then allows to forge signatures for the calling Picnic post-quantum signature scheme. We target the NIST reference implementation executed on a FRDM-K66F development board. Key recovery succeeds with fewer than 1000 LowMC traces, which can be obtained from fewer than 30 observed Picnic signatures.
Preprint
We show polynomial-time quantum algorithms for the following problems: (*) Short integer solution (SIS) problem under the infinity norm, where the public matrix is very wide, the modulus is a polynomially large prime, and the infinity-norm bound is set to half of the modulus minus a constant. (*) Extrapolated dihedral coset problem (EDCP) with certain parameters. (*) Learning with errors (LWE) problem given LWE-like quantum states with polynomially large moduli and certain error distributions, including bounded uniform distributions and Laplace distributions. The SIS, EDCP, and LWE problems in their standard forms are as hard as solving lattice problems in the worst case. However, the variants that we can solve are not in the parameter regimes known to be as hard as solving worst-case lattice problems. Still, no classical or quantum polynomial-time algorithms were previously known for those variants. Our algorithms for the variants of SIS and EDCP use the existing quantum reductions from those problems to LWE, or more precisely, to the problem of solving LWE given LWE-like quantum states. Our main contributions are introducing a filtering technique and solving LWE given LWE-like quantum states with interesting parameters.
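For reference, the infinity-norm SIS variant described above can be stated in the standard form (the formulation is generic; the specific parameter regime is the paper's):

```latex
\mathrm{SIS}^{\infty}_{n,m,q,\beta}:\quad
\text{given } A \in \mathbb{Z}_q^{n \times m} \ (m \gg n),\
\text{find } z \in \mathbb{Z}^m \setminus \{0\}\ \text{such that}\
A z \equiv 0 \pmod{q}\ \text{and}\ \|z\|_\infty \le \beta,
\qquad \beta = \left\lfloor q/2 \right\rfloor - c
```

for some constant $c$; it is the largeness of $\beta$ relative to $q$ that removes this variant from the worst-case-hard parameter regime.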
Preprint
A verifiable random function (VRF for short) is a powerful pseudo-random function that provides a non-interactively public verifiable proof for the correctness of its output. Recently, VRFs have found essential applications in blockchain design, such as random beacons and proof-of-stake consensus protocols. To our knowledge, the first generation of blockchain systems used inherently inefficient proof-of-work consensus, and the research community tried to achieve the same properties by proposing proof-of-stake schemes in which resource-intensive proof-of-work is emulated by cryptographic constructions. Unfortunately, the most discussed proof-of-stake consensus protocols (e.g., the Algorand and Ouroboros families) are not future-proof because their building blocks are secure only under classical hardness assumptions; in particular, their designs ignore the advent of quantum computing and its implications. In this paper, we propose a generic compiler to obtain a post-quantum VRF from a simple VRF solution using symmetric-key primitives (e.g., a non-interactive zero-knowledge system), which are intrinsically quantum-secure. Our solution is realized via two efficient zero-knowledge systems, ZKBoo and ZKB++, to validate the correctness of the compiler. Our proof-of-concept implementation indicates that even today, the overheads introduced by our solution are acceptable in real-world deployments. We also demonstrate potential applications of a quantum-secure VRF, such as a quantum-secure decentralized random beacon and a lottery-based proof-of-stake consensus blockchain protocol.
Chapter
We explore a bitwise modification in Ajtai’s one-way function. Our main contribution is to define the higher-bit approximate inhomogeneous short integer solution (ISIS) problem and prove its reduction to the ISIS problem. In this new instance, our main idea is to discard low-weighted bits to gain compactness.
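The idea of discarding low-weighted bits can be illustrated with a Dilithium-style power-of-two decomposition (a generic sketch, not the paper's exact transformation): each coefficient splits as r = r1·2^d + r0 with a small centred remainder r0, and only the high part r1 is kept.

```python
def decompose(r, d):
    """Split r into high bits r1 and a small centred remainder r0 with
    r = r1 * 2**d + r0 and -2**(d-1) < r0 <= 2**(d-1)."""
    r0 = r % (1 << d)
    if r0 > (1 << (d - 1)):
        r0 -= 1 << d                      # centre the remainder around 0
    r1 = (r - r0) >> d
    return r1, r0

r1, r0 = decompose(1000, 4)               # 1000 = 62 * 16 + 8
```

Dropping r0 loses at most 2^(d-1) per coefficient, which is what makes the compact "approximate" variant of the problem meaningful.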
Article
In this paper, we introduce a configurable hardware architecture that can be used to generate unified and parametric NTT-based polynomial multipliers supporting a wide range of parameters of lattice-based cryptographic schemes proposed for post-quantum cryptography. Both NTT and inverse NTT operations can be performed using the unified butterfly unit of our architecture, which constitutes the core building block in NTT operations. The number of these units plays an essential role in achieving the performance goals of a specific application area or platform. To this end, the architecture takes the number of butterfly units as input and generates an efficient NTT-based polynomial multiplier hardware to achieve the desired throughput and area requirements. More specifically, the proposed hardware architecture provides run-time configurability for the scheme parameters and compile-time configurability for throughput and area requirements. To the best of our knowledge, this work presents the first architecture with both run-time and compile-time configurability for NTT-based polynomial multiplication operations. The implementation results indicate that the advanced configurability has a negligible impact on the time and area of the proposed architecture and that its performance is on par with the state-of-the-art implementations in the literature, if not better. The proposed architecture comprises various sub-blocks, such as modular multiplier and butterfly units, each of which can be of interest on its own for accelerating lattice-based cryptography. Thus, we provide the design rationale of each sub-block and compare it with those in the literature, including our earlier works, in terms of configurability and performance.
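The Cooley–Tukey butterfly at the heart of such a unit maps a pair (a, b) to (a + ω·b, a − ω·b) mod q. A toy recursive NTT over Z_17 with n = 8 makes the structure concrete (the parameters are chosen only so that a primitive n-th root of unity, ω = 2, exists; real schemes use far larger q and n):

```python
Q, N, OMEGA = 17, 8, 2                    # 2 has multiplicative order 8 mod 17

def ntt(a, omega):
    """Recursive Cooley-Tukey NTT; each combine step is one butterfly."""
    n = len(a)
    if n == 1:
        return a[:]
    half_root = omega * omega % Q         # root for the half-size transforms
    even, odd = ntt(a[0::2], half_root), ntt(a[1::2], half_root)
    out, w = [0] * n, 1
    for k in range(n // 2):
        t = w * odd[k] % Q                # butterfly: (e + w*o, e - w*o)
        out[k] = (even[k] + t) % Q
        out[k + n // 2] = (even[k] - t) % Q
        w = w * omega % Q
    return out

def intt(a):
    """Inverse NTT: same butterflies with omega^-1, scaled by n^-1."""
    inv_n = pow(N, -1, Q)
    return [x * inv_n % Q for x in ntt(a, pow(OMEGA, -1, Q))]

coeffs = [1, 2, 3, 4, 5, 6, 7, 8]
assert intt(ntt(coeffs, OMEGA)) == coeffs # round trip recovers the input
```

Because the inverse transform reuses the same butterfly with ω⁻¹, a single unified butterfly unit can serve both directions, which is exactly the design choice the architecture above exploits.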
Chapter
Lattice-based encryption schemes are often subject to the possibility of decryption failures, in which valid encryptions are decrypted incorrectly. Such failures, in large number, leak information about the secret key, enabling an attack strategy alternative to pure lattice reduction. Extending the “failure boosting” technique of D’Anvers et al. in PKC 2019, we propose an approach that we call “directional failure boosting” that uses previously found “failing ciphertexts” to accelerate the search for new ones. We analyse in detail the case where the lattice is defined over polynomial ring modules quotiented by $$\langle X^{N} + 1 \rangle$$ and demonstrate it on a simple Mod-LWE-based scheme parametrized à la Kyber768/Saber. We show that for a given secret key (single-target setting), the cost of searching for additional failing ciphertexts after one or more have already been found, can be sped up dramatically. We thus demonstrate that, in this single-target model, these schemes should be designed so that it is hard to even obtain one decryption failure. Besides, in a wider security model where there are many target secret keys (multi-target setting), our attack greatly improves over the state of the art.
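In such Mod-LWE schemes a decryption failure occurs when the accumulated noise crosses the rounding threshold used to recover a message bit, roughly q/4; failure boosting searches for ciphertexts whose noise makes that crossing more likely. A minimal sketch of the failure condition only (the threshold, modulus, and noise vectors are illustrative, not the paper's analysis):

```python
def decryption_fails(secret_noise, ciphertext_noise, q):
    """Failure iff the inner product of the noise terms exceeds q/4,
    the rounding threshold for recovering one message bit."""
    inner = sum(e * s for e, s in zip(ciphertext_noise, secret_noise))
    return abs(inner) > q // 4

q = 3329                                  # Kyber-style modulus
ok      = decryption_fails([1, -2, 0, 1], [2, 1, -1, 3], q)
boosted = decryption_fails([20, -25, 30, -18], [15, 14, -16, 17], q)
```

"Directional" boosting goes further: once one failing ciphertext is known, its noise direction correlates with the secret, so subsequent searches can be biased toward that direction instead of sampling blindly.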